Dec 16 13:06:35.914271 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 16 13:06:35.914316 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:06:35.914336 kernel: BIOS-provided physical RAM map: Dec 16 13:06:35.914349 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:06:35.914361 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Dec 16 13:06:35.914374 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Dec 16 13:06:35.914389 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Dec 16 13:06:35.914403 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Dec 16 13:06:35.914416 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Dec 16 13:06:35.914428 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Dec 16 13:06:35.914444 kernel: NX (Execute Disable) protection: active Dec 16 13:06:35.914457 kernel: APIC: Static calls initialized Dec 16 13:06:35.914470 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Dec 16 13:06:35.914484 kernel: extended physical RAM map: Dec 16 13:06:35.914501 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:06:35.914517 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Dec 16 13:06:35.914532 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Dec 16 13:06:35.914546 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Dec 16 13:06:35.914561 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Dec 16 13:06:35.914575 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Dec 16 13:06:35.914589 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Dec 16 13:06:35.914603 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Dec 16 13:06:35.914618 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Dec 16 13:06:35.914632 kernel: efi: EFI v2.7 by EDK II Dec 16 13:06:35.914646 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Dec 16 13:06:35.914663 kernel: secureboot: Secure boot disabled Dec 16 13:06:35.914677 kernel: SMBIOS 2.7 present. 
Dec 16 13:06:35.914692 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 16 13:06:35.914706 kernel: DMI: Memory slots populated: 1/1 Dec 16 13:06:35.914720 kernel: Hypervisor detected: KVM Dec 16 13:06:35.914734 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Dec 16 13:06:35.914748 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 13:06:35.914797 kernel: kvm-clock: using sched offset of 6862313188 cycles Dec 16 13:06:35.914813 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 13:06:35.914828 kernel: tsc: Detected 2499.994 MHz processor Dec 16 13:06:35.914846 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:06:35.914861 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:06:35.914876 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Dec 16 13:06:35.914890 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 13:06:35.914906 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:06:35.914926 kernel: Using GB pages for direct mapping Dec 16 13:06:35.914945 kernel: ACPI: Early table checksum verification disabled Dec 16 13:06:35.914960 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Dec 16 13:06:35.914976 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Dec 16 13:06:35.914992 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 16 13:06:35.915007 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 16 13:06:35.915026 kernel: ACPI: FACS 0x00000000789D0000 000040 Dec 16 13:06:35.915041 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 16 13:06:35.915057 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 16 13:06:35.915072 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 16 13:06:35.915088 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 16 13:06:35.915103 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 16 13:06:35.915119 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 16 13:06:35.915138 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 16 13:06:35.915153 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Dec 16 13:06:35.915169 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Dec 16 13:06:35.915185 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Dec 16 13:06:35.915200 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Dec 16 13:06:35.915217 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Dec 16 13:06:35.915232 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Dec 16 13:06:35.915248 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Dec 16 13:06:35.915266 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Dec 16 13:06:35.915282 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Dec 16 13:06:35.915297 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Dec 16 13:06:35.915312 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Dec 16 13:06:35.915328 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Dec 16 13:06:35.915343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 16 13:06:35.915359 kernel: NUMA: Initialized distance table, cnt=1 Dec 16 13:06:35.915377 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Dec 16 13:06:35.915393 kernel: Zone ranges: Dec 16 13:06:35.915408 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:06:35.915424 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Dec 16 13:06:35.915439 kernel: Normal empty Dec 16 13:06:35.915455 kernel: Device empty Dec 16 13:06:35.915470 kernel: Movable zone start for each node Dec 16 13:06:35.915485 kernel: Early memory node ranges Dec 16 13:06:35.915504 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 16 13:06:35.915519 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Dec 16 13:06:35.915535 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Dec 16 13:06:35.915551 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Dec 16 13:06:35.915566 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:06:35.915582 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 16 13:06:35.915598 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 16 13:06:35.915616 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Dec 16 13:06:35.915631 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 16 13:06:35.915647 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 13:06:35.915663 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 16 13:06:35.915678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 13:06:35.915694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:06:35.915709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 13:06:35.915728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 13:06:35.915743 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:06:35.915786 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 13:06:35.915802 kernel: TSC deadline timer available Dec 16 13:06:35.915817 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:06:35.915833 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:06:35.915848 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:06:35.915863 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:06:35.915882 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:06:35.915898 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 13:06:35.915913 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:06:35.915928 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 13:06:35.915944 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Dec 16 13:06:35.915960 kernel: Booting paravirtualized kernel on KVM Dec 16 13:06:35.915976 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:06:35.915992 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:06:35.916021 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:06:35.916040 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:06:35.916055 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:06:35.916071 kernel: kvm-guest: PV spinlocks enabled Dec 16 13:06:35.916086 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:06:35.916105 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:06:35.916124 kernel: random: crng init done Dec 16 13:06:35.916139 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 13:06:35.916155 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:06:35.916171 kernel: Fallback order for Node 0: 0 Dec 16 13:06:35.916187 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Dec 16 13:06:35.916204 kernel: Policy zone: DMA32 Dec 16 13:06:35.916233 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:06:35.916250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:06:35.916266 kernel: Kernel/User page tables isolation: enabled Dec 16 13:06:35.916285 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:06:35.916302 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:06:35.916319 kernel: Dynamic Preempt: voluntary Dec 16 13:06:35.916335 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:06:35.916353 kernel: rcu: RCU event tracing is enabled. Dec 16 13:06:35.916370 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:06:35.916390 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:06:35.916407 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:06:35.916423 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:06:35.916440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:06:35.916456 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:06:35.916472 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:06:35.916492 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:06:35.916509 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 13:06:35.916526 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 13:06:35.916542 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 13:06:35.916559 kernel: Console: colour dummy device 80x25 Dec 16 13:06:35.916575 kernel: printk: legacy console [tty0] enabled Dec 16 13:06:35.916592 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:06:35.916612 kernel: ACPI: Core revision 20240827 Dec 16 13:06:35.916629 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 16 13:06:35.916645 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:06:35.916662 kernel: x2apic enabled Dec 16 13:06:35.916679 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:06:35.916695 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Dec 16 13:06:35.916712 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) Dec 16 13:06:35.916732 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 16 13:06:35.916749 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Dec 16 13:06:35.916784 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:06:35.916800 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 13:06:35.916816 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:06:35.916832 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 16 13:06:35.916849 kernel: RETBleed: Vulnerable Dec 16 13:06:35.916865 kernel: Speculative Store Bypass: Vulnerable Dec 16 13:06:35.916880 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 13:06:35.916900 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 13:06:35.916916 kernel: GDS: Unknown: Dependent on hypervisor status Dec 16 13:06:35.916932 kernel: active return thunk: its_return_thunk Dec 16 13:06:35.916948 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:06:35.916964 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:06:35.916981 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:06:35.916998 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:06:35.917015 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 16 13:06:35.917031 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 16 13:06:35.917048 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 16 13:06:35.917067 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 16 13:06:35.917083 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 16 13:06:35.917099 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 16 13:06:35.917116 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:06:35.917132 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 16 13:06:35.917148 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 16 13:06:35.917164 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 16 13:06:35.917180 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 16 13:06:35.917196 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 16 13:06:35.917212 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 16 
13:06:35.917228 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Dec 16 13:06:35.917248 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:06:35.917263 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:06:35.917279 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:06:35.917295 kernel: landlock: Up and running. Dec 16 13:06:35.917311 kernel: SELinux: Initializing. Dec 16 13:06:35.917327 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 13:06:35.917344 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 13:06:35.917360 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 16 13:06:35.917376 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 16 13:06:35.917393 kernel: signal: max sigframe size: 3632 Dec 16 13:06:35.917412 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:06:35.917429 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:06:35.917446 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:06:35.917462 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:06:35.917479 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:06:35.917496 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:06:35.917512 kernel: .... node #0, CPUs: #1 Dec 16 13:06:35.917532 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 16 13:06:35.917550 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 16 13:06:35.917566 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:06:35.917583 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) Dec 16 13:06:35.917600 kernel: Memory: 1926484K/2037804K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 106756K reserved, 0K cma-reserved) Dec 16 13:06:35.917618 kernel: devtmpfs: initialized Dec 16 13:06:35.917634 kernel: x86/mm: Memory block size: 128MB Dec 16 13:06:35.917654 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Dec 16 13:06:35.917670 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:06:35.917688 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:06:35.917704 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:06:35.917721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:06:35.917737 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:06:35.917765 kernel: audit: type=2000 audit(1765890391.159:1): state=initialized audit_enabled=0 res=1 Dec 16 13:06:35.917785 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:06:35.917803 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:06:35.917819 kernel: cpuidle: using governor menu Dec 16 13:06:35.917836 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:06:35.917852 kernel: dca service started, version 1.12.1 Dec 16 13:06:35.917869 kernel: PCI: Using configuration type 1 for base access Dec 16 13:06:35.917886 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Dec 16 13:06:35.917905 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:06:35.917922 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:06:35.917939 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:06:35.917955 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:06:35.917971 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:06:35.917988 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:06:35.918005 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:06:35.918025 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 16 13:06:35.918041 kernel: ACPI: Interpreter enabled Dec 16 13:06:35.918057 kernel: ACPI: PM: (supports S0 S5) Dec 16 13:06:35.918074 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:06:35.918090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:06:35.918107 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 13:06:35.918123 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 16 13:06:35.918143 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 13:06:35.918478 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 16 13:06:35.918682 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 16 13:06:35.919031 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 16 13:06:35.919055 kernel: acpiphp: Slot [3] registered Dec 16 13:06:35.919072 kernel: acpiphp: Slot [4] registered Dec 16 13:06:35.919096 kernel: acpiphp: Slot [5] registered Dec 16 13:06:35.919113 kernel: acpiphp: Slot [6] registered Dec 16 13:06:35.919129 kernel: acpiphp: Slot [7] registered Dec 16 13:06:35.919146 kernel: acpiphp: Slot [8] registered Dec 16 13:06:35.919163 kernel: acpiphp: Slot [9] registered Dec 16 13:06:35.919180 kernel: acpiphp: Slot [10] registered Dec 16 13:06:35.919198 kernel: acpiphp: Slot [11] registered Dec 16 13:06:35.919218 kernel: acpiphp: Slot [12] registered Dec 16 13:06:35.919235 kernel: acpiphp: Slot [13] registered Dec 16 13:06:35.919251 kernel: acpiphp: Slot [14] registered Dec 16 13:06:35.919267 kernel: acpiphp: Slot [15] registered Dec 16 13:06:35.919285 kernel: acpiphp: Slot [16] registered Dec 16 13:06:35.919301 kernel: acpiphp: Slot [17] registered Dec 16 13:06:35.919318 kernel: acpiphp: Slot [18] registered Dec 16 13:06:35.919334 kernel: acpiphp: Slot [19] registered Dec 16 13:06:35.919354 kernel: acpiphp: Slot [20] registered Dec 16 13:06:35.919371 kernel: acpiphp: Slot [21] registered Dec 16 13:06:35.919388 kernel: acpiphp: Slot [22] registered Dec 16 13:06:35.919404 kernel: acpiphp: Slot [23] registered Dec 16 13:06:35.919421 kernel: acpiphp: Slot [24] registered Dec 16 13:06:35.919437 kernel: acpiphp: Slot [25] registered Dec 16 13:06:35.919454 kernel: acpiphp: Slot [26] registered Dec 16 13:06:35.919473 kernel: acpiphp: Slot [27] registered Dec 16 13:06:35.919489 kernel: acpiphp: Slot [28] registered Dec 16 13:06:35.919506 kernel: acpiphp: Slot [29] registered Dec 16 13:06:35.919523 kernel: acpiphp: Slot [30] registered Dec 16 13:06:35.919539 kernel: acpiphp: Slot [31] registered Dec 16 13:06:35.919556 kernel: PCI host bridge to bus 0000:00 Dec 16 13:06:35.919750 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 
13:06:35.919951 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 13:06:35.920128 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 13:06:35.920302 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 16 13:06:35.920475 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Dec 16 13:06:35.921083 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 13:06:35.921306 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 16 13:06:35.921551 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 16 13:06:35.922929 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Dec 16 13:06:35.923170 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 16 13:06:35.923370 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 16 13:06:35.923543 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 16 13:06:35.923719 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 16 13:06:35.923908 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 16 13:06:35.924078 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 16 13:06:35.924247 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 16 13:06:35.924428 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Dec 16 13:06:35.924599 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Dec 16 13:06:35.926152 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 16 13:06:35.926367 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 13:06:35.926552 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Dec 16 13:06:35.926725 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Dec 16 13:06:35.926932 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Dec 16 13:06:35.933136 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Dec 16 13:06:35.933166 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 13:06:35.933185 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 13:06:35.933202 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 13:06:35.933219 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 13:06:35.933237 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 16 13:06:35.933255 kernel: iommu: Default domain type: Translated Dec 16 13:06:35.933284 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:06:35.933301 kernel: efivars: Registered efivars operations Dec 16 13:06:35.933318 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:06:35.933337 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 13:06:35.933354 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Dec 16 13:06:35.933371 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Dec 16 13:06:35.933388 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Dec 16 13:06:35.933584 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 16 13:06:35.935901 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 16 13:06:35.936166 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 13:06:35.936198 kernel: vgaarb: loaded Dec 16 13:06:35.936215 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 16 13:06:35.936231 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 16 13:06:35.936258 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 13:06:35.936275 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:06:35.936290 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:06:35.936307 kernel: pnp: PnP ACPI init Dec 16 13:06:35.936322 kernel: pnp: PnP ACPI: found 5 devices Dec 16 13:06:35.936339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:06:35.936355 kernel: NET: Registered PF_INET protocol family Dec 16 13:06:35.936371 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 13:06:35.936391 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 13:06:35.936407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:06:35.936422 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:06:35.936438 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 13:06:35.936455 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 13:06:35.936471 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 13:06:35.936491 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 13:06:35.936507 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:06:35.936524 kernel: NET: Registered PF_XDP protocol family Dec 16 13:06:35.936735 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 13:06:35.936925 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 13:06:35.937088 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 13:06:35.937252 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 16 13:06:35.937425 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Dec 16 13:06:35.937613 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 13:06:35.937634 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:06:35.937651 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 13:06:35.937682 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Dec 16 13:06:35.937698 kernel: clocksource: Switched to clocksource tsc Dec 16 13:06:35.937712 kernel: Initialise system trusted keyrings Dec 16 13:06:35.937731 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 13:06:35.937746 kernel: Key type asymmetric registered Dec 16 13:06:35.937794 kernel: Asymmetric key parser 'x509' registered Dec 16 13:06:35.937809 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:06:35.937824 kernel: io scheduler mq-deadline registered Dec 16 13:06:35.937840 kernel: io scheduler kyber registered Dec 16 13:06:35.937856 kernel: io scheduler bfq registered Dec 16 13:06:35.937873 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:06:35.937888 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:06:35.937904 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:06:35.937920 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 13:06:35.937934 kernel: i8042: Warning: Keylock active Dec 16 
13:06:35.937949 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:06:35.937964 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:06:35.938166 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 16 13:06:35.938349 kernel: rtc_cmos 00:00: registered as rtc0 Dec 16 13:06:35.938516 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:06:32 UTC (1765890392) Dec 16 13:06:35.938684 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 16 13:06:35.938725 kernel: intel_pstate: CPU model not supported Dec 16 13:06:35.938744 kernel: efifb: probing for efifb Dec 16 13:06:35.938778 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Dec 16 13:06:35.938795 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Dec 16 13:06:35.938811 kernel: efifb: scrolling: redraw Dec 16 13:06:35.938826 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:06:35.938841 kernel: Console: switching to colour frame buffer device 100x37 Dec 16 13:06:35.938857 kernel: fb0: EFI VGA frame buffer device Dec 16 13:06:35.938873 kernel: pstore: Using crash dump compression: deflate Dec 16 13:06:35.938890 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:06:35.938905 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:06:35.938921 kernel: Segment Routing with IPv6 Dec 16 13:06:35.938936 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:06:35.938952 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:06:35.938968 kernel: Key type dns_resolver registered Dec 16 13:06:35.938983 kernel: IPI shorthand broadcast: enabled Dec 16 13:06:35.938999 kernel: sched_clock: Marking stable (1497003640, 153770928)->(1742367391, -91592823) Dec 16 13:06:35.939017 kernel: registered taskstats version 1 Dec 16 13:06:35.939033 kernel: Loading compiled-in X.509 certificates Dec 16 13:06:35.939049 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 16 13:06:35.939064 kernel: Demotion targets for Node 0: null Dec 16 13:06:35.939080 kernel: Key type .fscrypt registered Dec 16 13:06:35.939096 kernel: Key type fscrypt-provisioning registered Dec 16 13:06:35.939112 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:06:35.939130 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:06:35.939145 kernel: ima: No architecture policies found Dec 16 13:06:35.939161 kernel: clk: Disabling unused clocks Dec 16 13:06:35.939177 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 16 13:06:35.939193 kernel: Write protecting the kernel read-only data: 45056k Dec 16 13:06:35.939212 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 16 13:06:35.939228 kernel: Run /init as init process Dec 16 13:06:35.939243 kernel: with arguments: Dec 16 13:06:35.939258 kernel: /init Dec 16 13:06:35.939274 kernel: with environment: Dec 16 13:06:35.939289 kernel: HOME=/ Dec 16 13:06:35.939304 kernel: TERM=linux Dec 16 13:06:35.939455 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 13:06:35.939479 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 16 13:06:35.943420 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:06:35.943471 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:06:35.943488 kernel: GPT:25804799 != 33554431 Dec 16 13:06:35.943514 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 16 13:06:35.943529 kernel: GPT:25804799 != 33554431 Dec 16 13:06:35.943545 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:06:35.943560 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:06:35.943576 kernel: SCSI subsystem initialized Dec 16 13:06:35.943593 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:06:35.943609 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:06:35.943628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:06:35.943644 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 13:06:35.943660 kernel: raid6: avx512x4 gen() 17863 MB/s Dec 16 13:06:35.943678 kernel: raid6: avx512x2 gen() 17074 MB/s Dec 16 13:06:35.943694 kernel: raid6: avx512x1 gen() 18023 MB/s Dec 16 13:06:35.943711 kernel: raid6: avx2x4 gen() 17916 MB/s Dec 16 13:06:35.943726 kernel: raid6: avx2x2 gen() 17857 MB/s Dec 16 13:06:35.943746 kernel: raid6: avx2x1 gen() 13324 MB/s Dec 16 13:06:35.943778 kernel: raid6: using algorithm avx512x1 gen() 18023 MB/s Dec 16 13:06:35.943794 kernel: raid6: .... xor() 20997 MB/s, rmw enabled Dec 16 13:06:35.943810 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:06:35.943826 kernel: xor: automatically using best checksumming function avx Dec 16 13:06:35.943842 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 13:06:35.943859 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:06:35.943878 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (152) Dec 16 13:06:35.943894 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 13:06:35.943909 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:06:35.943925 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:06:35.943941 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:06:35.943956 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:06:35.943973 kernel: loop: module loaded Dec 16 13:06:35.943991 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 13:06:35.944006 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:06:35.944034 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:06:35.944054 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:06:35.944072 systemd[1]: Detected virtualization amazon. Dec 16 13:06:35.944089 systemd[1]: Detected architecture x86-64. Dec 16 13:06:35.944108 systemd[1]: Running in initrd. Dec 16 13:06:35.944123 systemd[1]: No hostname configured, using default hostname. Dec 16 13:06:35.944140 systemd[1]: Hostname set to . Dec 16 13:06:35.944156 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 13:06:35.944172 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:06:35.944189 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Dec 16 13:06:35.944205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:06:35.944225 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:06:35.944245 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:06:35.944261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:06:35.944279 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:06:35.944296 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:06:35.944315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:06:35.944331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:06:35.944348 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:06:35.944364 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:06:35.944380 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:06:35.944396 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:06:35.944413 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:06:35.944433 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:06:35.944450 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:06:35.944466 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:06:35.944483 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:06:35.944500 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:06:35.944517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:06:35.944534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:06:35.944553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:06:35.944570 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:06:35.944587 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:06:35.944604 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:06:35.944620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:06:35.944637 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:06:35.944654 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:06:35.944673 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:06:35.944690 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:06:35.944707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:06:35.944725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:06:35.944745 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:06:35.944772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:06:35.944789 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 16 13:06:35.944806 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:06:35.944852 systemd-journald[289]: Collecting audit messages is enabled. Dec 16 13:06:35.944891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:06:35.944909 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:06:35.944925 kernel: audit: type=1130 audit(1765890395.939:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.944944 systemd-journald[289]: Journal started Dec 16 13:06:35.944977 systemd-journald[289]: Runtime Journal (/run/log/journal/ec289c946a30595dff5bd7fff5ab9de6) is 4.7M, max 38M, 33.2M free. Dec 16 13:06:35.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.947836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:06:35.952608 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:06:35.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.959784 kernel: audit: type=1130 audit(1765890395.953:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.961977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:06:35.972444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:06:35.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.981876 kernel: audit: type=1130 audit(1765890395.973:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.981914 kernel: Bridge firewalling registered Dec 16 13:06:35.983088 systemd-modules-load[291]: Inserted module 'br_netfilter' Dec 16 13:06:35.984971 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:06:35.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:35.990015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:06:35.992999 kernel: audit: type=1130 audit(1765890395.984:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.000624 systemd-tmpfiles[306]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Dec 16 13:06:36.014991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:06:36.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.026319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:06:36.035124 kernel: audit: type=1130 audit(1765890396.015:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.049250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:06:36.054316 kernel: audit: type=1130 audit(1765890396.035:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.054355 kernel: audit: type=1130 audit(1765890396.049:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.055219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:06:36.060000 audit: BPF prog-id=6 op=LOAD Dec 16 13:06:36.063114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:06:36.065368 kernel: audit: type=1334 audit(1765890396.060:9): prog-id=6 op=LOAD Dec 16 13:06:36.094072 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:06:36.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.102957 kernel: audit: type=1130 audit(1765890396.095:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.105052 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:06:36.177667 dracut-cmdline[330]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:06:36.178843 systemd-resolved[318]: Positive Trust Anchors: Dec 16 13:06:36.178855 systemd-resolved[318]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:06:36.178861 systemd-resolved[318]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:06:36.178923 systemd-resolved[318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:06:36.222572 systemd-resolved[318]: Defaulting to hostname 'linux'. Dec 16 13:06:36.225207 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:06:36.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.227088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:06:36.234541 kernel: audit: type=1130 audit(1765890396.226:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.452866 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:06:36.545790 kernel: iscsi: registered transport (tcp) Dec 16 13:06:36.615909 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:06:36.616006 kernel: QLogic iSCSI HBA Driver Dec 16 13:06:36.658276 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:06:36.688076 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:06:36.696967 kernel: audit: type=1130 audit(1765890396.689:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.692433 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:06:36.764566 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:06:36.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.768978 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:06:36.775862 kernel: audit: type=1130 audit(1765890396.764:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.775673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:06:36.821861 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 13:06:36.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.830019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:06:36.846003 kernel: audit: type=1130 audit(1765890396.823:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.846064 kernel: audit: type=1334 audit(1765890396.824:15): prog-id=7 op=LOAD Dec 16 13:06:36.846084 kernel: audit: type=1334 audit(1765890396.824:16): prog-id=8 op=LOAD Dec 16 13:06:36.824000 audit: BPF prog-id=7 op=LOAD Dec 16 13:06:36.824000 audit: BPF prog-id=8 op=LOAD Dec 16 13:06:36.903518 systemd-udevd[571]: Using default interface naming scheme 'v257'. Dec 16 13:06:36.923605 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:06:36.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.933279 kernel: audit: type=1130 audit(1765890396.925:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.932463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:06:36.970296 dracut-pre-trigger[636]: rd.md=0: removing MD RAID activation Dec 16 13:06:36.973625 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:06:36.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.982853 kernel: audit: type=1130 audit(1765890396.975:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:36.983901 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:06:36.977000 audit: BPF prog-id=9 op=LOAD Dec 16 13:06:36.989787 kernel: audit: type=1334 audit(1765890396.977:19): prog-id=9 op=LOAD Dec 16 13:06:37.019874 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:06:37.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:37.026044 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:06:37.058702 systemd-networkd[682]: lo: Link UP Dec 16 13:06:37.059949 systemd-networkd[682]: lo: Gained carrier Dec 16 13:06:37.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:37.061276 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:06:37.062403 systemd[1]: Reached target network.target - Network. 
Dec 16 13:06:37.123307 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:06:37.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:37.127276 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:06:37.251145 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:06:37.251442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:06:37.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:37.253947 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:06:37.258432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:06:37.268505 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 13:06:37.268937 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 13:06:37.274544 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 16 13:06:37.280786 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:b1:00:f8:93:f3 Dec 16 13:06:37.290328 (udev-worker)[712]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:06:37.300797 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:06:37.328051 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:06:37.328071 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:06:37.336021 systemd-networkd[682]: eth0: Link UP Dec 16 13:06:37.336204 systemd-networkd[682]: eth0: Gained carrier Dec 16 13:06:37.336225 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:06:37.342134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:06:37.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:37.345926 systemd-networkd[682]: eth0: DHCPv4 address 172.31.28.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:06:37.354794 kernel: AES CTR mode by8 optimization enabled Dec 16 13:06:37.365806 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:06:37.397785 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:06:37.558959 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 13:06:37.620003 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:06:37.644966 disk-uuid[876]: Primary Header is updated. Dec 16 13:06:37.644966 disk-uuid[876]: Secondary Entries is updated. Dec 16 13:06:37.644966 disk-uuid[876]: Secondary Header is updated. Dec 16 13:06:37.679339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Dec 16 13:06:37.706597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:06:37.764692 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 13:06:38.007523 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:06:38.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:38.009300 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:06:38.010216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:06:38.011651 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:06:38.014045 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:06:38.052037 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:06:38.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:38.792071 disk-uuid[883]: Warning: The kernel is still using the old partition table. Dec 16 13:06:38.792071 disk-uuid[883]: The new table will be used at the next reboot or after you Dec 16 13:06:38.792071 disk-uuid[883]: run partprobe(8) or kpartx(8) Dec 16 13:06:38.792071 disk-uuid[883]: The operation has completed successfully. Dec 16 13:06:38.804372 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:06:38.804531 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:06:38.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:38.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:38.807626 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:06:38.846074 systemd-networkd[682]: eth0: Gained IPv6LL Dec 16 13:06:38.864819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1078) Dec 16 13:06:38.869018 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:06:38.869108 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:06:38.902055 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:06:38.902153 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:06:38.921807 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:06:38.923044 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:06:38.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:38.929119 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
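The disk-uuid output above ("The kernel is still using the old partition table", "run partprobe(8) or kpartx(8)") is the standard gdisk/sgdisk wording, so the unit is presumably regenerating the GPT GUIDs with an sgdisk-style tool. A hedged sketch of doing the equivalent by hand, assuming sgdisk and partprobe are installed and that /dev/nvme0n1 is the target disk; run as root, and not on a disk you care about without a backup:

import subprocess

disk = "/dev/nvme0n1"  # assumption: the EBS boot disk seen elsewhere in this log

# sgdisk -G randomizes the disk GUID and every partition's unique GUID,
# which is what "Generate new UUID for disk GPT" amounts to.
subprocess.run(["sgdisk", "-G", disk], check=True)

# Ask the kernel to re-read the partition table instead of waiting for a reboot.
subprocess.run(["partprobe", disk], check=True)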
Dec 16 13:06:40.201616 ignition[1097]: Ignition 2.22.0 Dec 16 13:06:40.201634 ignition[1097]: Stage: fetch-offline Dec 16 13:06:40.201934 ignition[1097]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:40.201951 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:40.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:40.204962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:06:40.202265 ignition[1097]: Ignition finished successfully Dec 16 13:06:40.207321 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 13:06:40.258047 ignition[1104]: Ignition 2.22.0 Dec 16 13:06:40.258064 ignition[1104]: Stage: fetch Dec 16 13:06:40.258526 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:40.258538 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:40.258680 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:40.304686 ignition[1104]: PUT result: OK Dec 16 13:06:40.323662 ignition[1104]: parsed url from cmdline: "" Dec 16 13:06:40.323676 ignition[1104]: no config URL provided Dec 16 13:06:40.323689 ignition[1104]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:06:40.323712 ignition[1104]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:06:40.323750 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:40.325164 ignition[1104]: PUT result: OK Dec 16 13:06:40.325237 ignition[1104]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 13:06:40.329020 ignition[1104]: GET result: OK Dec 16 13:06:40.329200 ignition[1104]: parsing config with SHA512: 32e4b160548e09668cc1f6003d580f2227cc24ec911f73d76e9b35ffae5e68eeaedcede9ea6a20368c1e5275ba176f6243850bad76676d0f3817c8c4cdb96e1b Dec 16 13:06:40.341703 unknown[1104]: fetched base config from "system" Dec 16 13:06:40.341720 unknown[1104]: fetched base config from "system" Dec 16 13:06:40.342409 ignition[1104]: fetch: fetch complete Dec 16 13:06:40.341728 unknown[1104]: fetched user config from "aws" Dec 16 13:06:40.342417 ignition[1104]: fetch: fetch passed Dec 16 13:06:40.342586 ignition[1104]: Ignition finished successfully Dec 16 13:06:40.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:40.346386 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:06:40.348889 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:06:40.393135 ignition[1110]: Ignition 2.22.0 Dec 16 13:06:40.393154 ignition[1110]: Stage: kargs Dec 16 13:06:40.393633 ignition[1110]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:40.393646 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:40.393824 ignition[1110]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:40.394955 ignition[1110]: PUT result: OK Dec 16 13:06:40.397658 ignition[1110]: kargs: kargs passed Dec 16 13:06:40.397745 ignition[1110]: Ignition finished successfully Dec 16 13:06:40.400369 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
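The fetch stage above follows the IMDSv2 pattern: PUT to 169.254.169.254 for a session token, GET the user-data with that token, then log the SHA512 of the parsed config. A minimal Python sketch of the same exchange; the header names are the standard IMDSv2 ones, the paths are the ones in the log, and this is only meaningful when run inside an EC2 instance:

import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for a short-lived session token (IMDSv2).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=5).read().decode()

# Step 2: GET the user-data, presenting the token, as the fetch stage does.
req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
config = urllib.request.urlopen(req, timeout=5).read()

# Ignition logs the SHA512 of the config it parsed; reproduce that here.
print(hashlib.sha512(config).hexdigest())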
Dec 16 13:06:40.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:40.402049 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:06:40.445314 ignition[1117]: Ignition 2.22.0 Dec 16 13:06:40.445332 ignition[1117]: Stage: disks Dec 16 13:06:40.445811 ignition[1117]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:40.445824 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:40.445946 ignition[1117]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:40.447057 ignition[1117]: PUT result: OK Dec 16 13:06:40.449803 ignition[1117]: disks: disks passed Dec 16 13:06:40.449882 ignition[1117]: Ignition finished successfully Dec 16 13:06:40.452441 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:06:40.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:40.453158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:06:40.453560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:06:40.454146 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:06:40.454713 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:06:40.455413 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:06:40.457251 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:06:40.560359 systemd-fsck[1125]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 13:06:40.563253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:06:40.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:40.566909 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:06:40.825797 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 16 13:06:40.826307 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:06:40.829279 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:06:40.884849 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:06:40.887089 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:06:40.889745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:06:40.890577 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:06:40.891332 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:06:40.902515 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:06:40.905388 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 13:06:40.920221 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1144) Dec 16 13:06:40.924876 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:06:40.924971 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:06:40.933274 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:06:40.933379 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:06:40.936180 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:06:41.908788 initrd-setup-root[1168]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:06:41.949181 initrd-setup-root[1175]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:06:41.989222 initrd-setup-root[1182]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:06:41.995749 initrd-setup-root[1189]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:06:42.558410 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:06:42.567462 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 16 13:06:42.567506 kernel: audit: type=1130 audit(1765890402.558:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:42.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:42.561929 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:06:42.569218 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:06:42.587873 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:06:42.591632 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:06:42.618479 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:06:42.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:42.625792 kernel: audit: type=1130 audit(1765890402.619:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:42.635402 ignition[1257]: INFO : Ignition 2.22.0 Dec 16 13:06:42.635402 ignition[1257]: INFO : Stage: mount Dec 16 13:06:42.637077 ignition[1257]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:42.637077 ignition[1257]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:42.637077 ignition[1257]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:42.638691 ignition[1257]: INFO : PUT result: OK Dec 16 13:06:42.639890 ignition[1257]: INFO : mount: mount passed Dec 16 13:06:42.648840 ignition[1257]: INFO : Ignition finished successfully Dec 16 13:06:42.649812 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:06:42.656650 kernel: audit: type=1130 audit(1765890402.649:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:42.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:42.652954 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:06:42.676079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:06:42.713974 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1269) Dec 16 13:06:42.716939 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 13:06:42.717018 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:06:42.726507 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:06:42.726591 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:06:42.729124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:06:42.816831 ignition[1285]: INFO : Ignition 2.22.0 Dec 16 13:06:42.816831 ignition[1285]: INFO : Stage: files Dec 16 13:06:42.818286 ignition[1285]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:42.818286 ignition[1285]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:42.818286 ignition[1285]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:42.819873 ignition[1285]: INFO : PUT result: OK Dec 16 13:06:42.821604 ignition[1285]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:06:42.822584 ignition[1285]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:06:42.822584 ignition[1285]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:06:42.829494 ignition[1285]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:06:42.830888 ignition[1285]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:06:42.832159 ignition[1285]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:06:42.831462 unknown[1285]: wrote ssh authorized keys file for user: core Dec 16 13:06:42.835656 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:06:42.836644 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 13:06:42.914576 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:06:43.168486 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:06:43.168486 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:06:43.172006 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:06:43.178099 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:06:43.178099 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:06:43.178099 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:06:43.182172 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:06:43.182172 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:06:43.182172 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 13:06:43.520672 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:06:44.051819 ignition[1285]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:06:44.051819 ignition[1285]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:06:44.077368 ignition[1285]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:06:44.085935 ignition[1285]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:06:44.085935 ignition[1285]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:06:44.085935 ignition[1285]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:06:44.091570 ignition[1285]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:06:44.091570 ignition[1285]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:06:44.091570 ignition[1285]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:06:44.091570 ignition[1285]: INFO : files: files passed Dec 16 13:06:44.091570 ignition[1285]: INFO : Ignition finished successfully Dec 16 13:06:44.123867 kernel: audit: type=1130 audit(1765890404.091:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:44.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.091146 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:06:44.097081 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:06:44.110389 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:06:44.140512 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:06:44.141659 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:06:44.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.158676 kernel: audit: type=1130 audit(1765890404.142:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.158789 kernel: audit: type=1131 audit(1765890404.142:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.164333 initrd-setup-root-after-ignition[1318]: grep: Dec 16 13:06:44.165921 initrd-setup-root-after-ignition[1322]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:06:44.167430 initrd-setup-root-after-ignition[1318]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:06:44.167430 initrd-setup-root-after-ignition[1318]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:06:44.175562 kernel: audit: type=1130 audit(1765890404.167:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.167091 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:06:44.168533 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:06:44.177001 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:06:44.243451 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:06:44.243600 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:06:44.255006 kernel: audit: type=1130 audit(1765890404.244:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:44.255052 kernel: audit: type=1131 audit(1765890404.244:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.246033 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:06:44.255683 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:06:44.256977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:06:44.258443 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:06:44.292394 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:06:44.299952 kernel: audit: type=1130 audit(1765890404.292:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.295621 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:06:44.317679 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:06:44.318239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:06:44.319493 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:06:44.320587 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:06:44.321585 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:06:44.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.321873 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:06:44.323199 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:06:44.324270 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:06:44.325107 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:06:44.325999 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:06:44.326961 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:06:44.327638 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:06:44.328496 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:06:44.329420 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:06:44.330266 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:06:44.331548 systemd[1]: Stopped target local-fs.target - Local File Systems. 
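The Ignition files stage recorded further above (the op(1) through op(e) entries) is driven by a declarative config: SSH keys for the core user, a handful of files, a symlink for the kubernetes sysext, and an enabled prepare-helm.service unit. A rough Python sketch of that shape, emitted as JSON; the field names approximate the Ignition v3 spec and the values are placeholders, not the config this instance actually received:

import json

# Illustrative only: approximates the structure implied by the files-stage log.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]},
        ],
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm\n"},
        ],
    },
}

print(json.dumps(config, indent=2))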
Dec 16 13:06:44.332365 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:06:44.333127 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:06:44.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.333391 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:06:44.334438 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:06:44.335442 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:06:44.336124 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 13:06:44.336293 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:06:44.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.336943 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:06:44.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.337188 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:06:44.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.338241 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:06:44.338492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:06:44.339364 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:06:44.339590 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:06:44.342961 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:06:44.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.343451 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:06:44.343701 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:06:44.347092 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:06:44.350223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:06:44.350548 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:06:44.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.351615 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:06:44.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.352873 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 13:06:44.353978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:06:44.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.355161 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:06:44.368632 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:06:44.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.369610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:06:44.392056 ignition[1342]: INFO : Ignition 2.22.0 Dec 16 13:06:44.394017 ignition[1342]: INFO : Stage: umount Dec 16 13:06:44.394017 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:06:44.394017 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:06:44.394017 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:06:44.397847 ignition[1342]: INFO : PUT result: OK Dec 16 13:06:44.395170 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:06:44.401512 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:06:44.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.401673 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:06:44.403911 ignition[1342]: INFO : umount: umount passed Dec 16 13:06:44.403911 ignition[1342]: INFO : Ignition finished successfully Dec 16 13:06:44.405982 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:06:44.406146 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:06:44.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.407379 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:06:44.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.407456 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:06:44.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.408031 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:06:44.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.408102 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Dec 16 13:06:44.408740 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:06:44.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.408835 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:06:44.409492 systemd[1]: Stopped target network.target - Network. Dec 16 13:06:44.410160 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:06:44.410235 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:06:44.411066 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:06:44.411676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:06:44.411839 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:06:44.412343 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 13:06:44.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.413094 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:06:44.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.413741 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:06:44.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.413837 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:06:44.414419 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:06:44.414467 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:06:44.415222 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 13:06:44.415261 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:06:44.415860 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:06:44.415943 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:06:44.417075 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:06:44.417143 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:06:44.417792 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:06:44.417867 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:06:44.418601 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:06:44.419514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:06:44.432417 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:06:44.433133 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:06:44.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.437236 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 16 13:06:44.437475 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:06:44.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.440000 audit: BPF prog-id=6 op=UNLOAD Dec 16 13:06:44.441860 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:06:44.441000 audit: BPF prog-id=9 op=UNLOAD Dec 16 13:06:44.442486 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:06:44.442553 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:06:44.445707 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:06:44.447014 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:06:44.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.447110 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:06:44.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.450061 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:06:44.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.450150 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:06:44.453018 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:06:44.453107 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:06:44.454242 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:06:44.477988 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 13:06:44.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.478193 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:06:44.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.480300 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:06:44.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:44.480412 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:06:44.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.482215 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:06:44.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.482274 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:06:44.484229 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:06:44.484327 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:06:44.485251 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:06:44.485338 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:06:44.486304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:06:44.486396 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:06:44.488537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:06:44.490384 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:06:44.490488 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:06:44.492983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:06:44.493074 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:06:44.494617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:06:44.494698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:06:44.516972 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:06:44.519435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:06:44.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.524002 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:06:44.524182 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:06:44.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:44.525837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:06:44.527992 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:06:44.576286 systemd[1]: Switching root. Dec 16 13:06:44.672034 systemd-journald[289]: Journal stopped Dec 16 13:06:47.806933 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). 
Dec 16 13:06:47.807068 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:06:47.807108 kernel: SELinux: policy capability open_perms=1 Dec 16 13:06:47.807133 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:06:47.807157 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:06:47.807182 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:06:47.807213 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:06:47.807239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:06:47.807269 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:06:47.807297 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:06:47.807322 systemd[1]: Successfully loaded SELinux policy in 229.545ms. Dec 16 13:06:47.807356 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 29.729ms. Dec 16 13:06:47.807387 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:06:47.807416 systemd[1]: Detected virtualization amazon. Dec 16 13:06:47.807446 systemd[1]: Detected architecture x86-64. Dec 16 13:06:47.807472 systemd[1]: Detected first boot. Dec 16 13:06:47.807501 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 13:06:47.807528 kernel: Guest personality initialized and is inactive Dec 16 13:06:47.807559 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 13:06:47.807584 kernel: Initialized host personality Dec 16 13:06:47.807608 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:06:47.807633 zram_generator::config[1386]: No configuration found. Dec 16 13:06:47.807665 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:06:47.807702 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:06:47.807731 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:06:47.817956 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:06:47.818028 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:06:47.818057 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:06:47.818084 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:06:47.818111 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:06:47.818154 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:06:47.818178 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:06:47.818208 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:06:47.818234 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:06:47.818261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:06:47.818291 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:06:47.818319 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
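Being a first boot, systemd initializes the machine ID from the SMBIOS/DMI product UUID rather than generating a random one (the "Initializing machine ID from SMBIOS/DMI UUID" line above). A rough sketch of that idea, reading the DMI UUID and normalizing it to the 32-character /etc/machine-id form; this mirrors the concept, not systemd's exact code path:

from pathlib import Path

# DMI product UUID exposed by the kernel (readable as root).
dmi_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

# /etc/machine-id holds 32 lowercase hex characters with no dashes.
machine_id = dmi_uuid.replace("-", "").lower()
print(machine_id)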
Dec 16 13:06:47.818351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:06:47.818379 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:06:47.818405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:06:47.818431 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:06:47.818458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:06:47.818485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:06:47.818515 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:06:47.818540 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:06:47.818565 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:06:47.818594 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:06:47.818622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:06:47.818649 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:06:47.818676 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 13:06:47.818706 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:06:47.818745 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:06:47.823435 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:06:47.823480 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:06:47.823510 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:06:47.823536 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:06:47.826908 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 13:06:47.826961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:06:47.826990 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 13:06:47.827014 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 13:06:47.827040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:06:47.827068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:06:47.827095 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:06:47.827121 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:06:47.827394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:06:47.827425 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:06:47.827452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:47.827478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:06:47.827507 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:06:47.827533 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:06:47.827561 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Dec 16 13:06:47.827592 systemd[1]: Reached target machines.target - Containers. Dec 16 13:06:47.827620 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:06:47.827647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:06:47.827672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:06:47.831141 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:06:47.831181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:06:47.831218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:06:47.831245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:06:47.831271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:06:47.831297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:06:47.831326 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:06:47.831354 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:06:47.831382 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:06:47.831421 kernel: kauditd_printk_skb: 54 callbacks suppressed Dec 16 13:06:47.831455 kernel: audit: type=1131 audit(1765890407.572:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.831482 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:06:47.831513 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:06:47.831540 kernel: audit: type=1131 audit(1765890407.582:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.831566 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:06:47.831592 kernel: audit: type=1334 audit(1765890407.591:101): prog-id=14 op=UNLOAD Dec 16 13:06:47.831620 kernel: audit: type=1334 audit(1765890407.591:102): prog-id=13 op=UNLOAD Dec 16 13:06:47.831645 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:06:47.831673 kernel: audit: type=1334 audit(1765890407.592:103): prog-id=15 op=LOAD Dec 16 13:06:47.831696 kernel: audit: type=1334 audit(1765890407.594:104): prog-id=16 op=LOAD Dec 16 13:06:47.831721 kernel: audit: type=1334 audit(1765890407.594:105): prog-id=17 op=LOAD Dec 16 13:06:47.831746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:06:47.831795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:06:47.831819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:06:47.831839 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 13:06:47.831861 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:06:47.831884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:47.831909 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:06:47.831932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:06:47.831954 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:06:47.831980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:06:47.832003 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:06:47.832026 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:06:47.832050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:06:47.832073 kernel: audit: type=1130 audit(1765890407.685:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.832096 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:06:47.832119 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:06:47.832145 kernel: audit: type=1130 audit(1765890407.698:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.832167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:06:47.832189 kernel: audit: type=1131 audit(1765890407.698:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.832213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:06:47.832236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:06:47.832263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:06:47.832286 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:06:47.832310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:06:47.832333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:06:47.832355 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:06:47.832375 kernel: fuse: init (API version 7.41) Dec 16 13:06:47.832396 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:06:47.832419 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:06:47.832441 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:06:47.832512 systemd-journald[1463]: Collecting audit messages is enabled. Dec 16 13:06:47.832554 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:06:47.832578 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 13:06:47.832602 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
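The modprobe@*.service instances above are thin wrappers that each load one kernel module (configfs, dm_mod, efi_pstore, fuse, loop). A small check of /proc/modules against that same list; note that modules built into the kernel never appear there, so a missing entry is not necessarily a failure:

from pathlib import Path

wanted = {"configfs", "dm_mod", "efi_pstore", "fuse", "loop"}

# /proc/modules lists dynamically loaded modules, one per line, name first.
loaded = {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()}

for mod in sorted(wanted):
    state = "loaded" if mod in loaded else "not listed (may be built in)"
    print(f"{mod}: {state}")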
Dec 16 13:06:47.832627 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:06:47.832655 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:06:47.832681 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:06:47.832706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:06:47.832729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:06:47.836542 systemd-journald[1463]: Journal started Dec 16 13:06:47.836640 systemd-journald[1463]: Runtime Journal (/run/log/journal/ec289c946a30595dff5bd7fff5ab9de6) is 4.7M, max 38M, 33.2M free. Dec 16 13:06:47.836737 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:06:47.410000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 13:06:47.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.591000 audit: BPF prog-id=14 op=UNLOAD Dec 16 13:06:47.591000 audit: BPF prog-id=13 op=UNLOAD Dec 16 13:06:47.592000 audit: BPF prog-id=15 op=LOAD Dec 16 13:06:47.594000 audit: BPF prog-id=16 op=LOAD Dec 16 13:06:47.594000 audit: BPF prog-id=17 op=LOAD Dec 16 13:06:47.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:47.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.787000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:06:47.787000 audit[1463]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff0cafcc90 a2=4000 a3=0 items=0 ppid=1 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:47.787000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:06:47.309297 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:06:47.329175 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 13:06:47.330016 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:06:47.851612 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:06:47.851691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:06:47.861513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:06:47.861626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:06:47.871817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
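The SYSCALL record above from systemd-journald reports arch=c000003e and syscall=46. Purely as an illustration (this decoding is not in the log itself), those fields unpack to "64-bit, little-endian, EM_X86_64" and the sendmsg system call, likely journald talking to the kernel's audit netlink socket right after the CONFIG_CHANGE that enabled audit collection:

```python
# Decode arch=c000003e / syscall=46 from the SYSCALL audit record above.
# Bit masks follow the kernel audit ABI; the syscall name is the x86_64 table entry.
arch = 0xC000003E
is_64bit = bool(arch & 0x80000000)      # __AUDIT_ARCH_64BIT
is_le    = bool(arch & 0x40000000)      # __AUDIT_ARCH_LE
machine  = arch & 0xFFFF                # 62 == EM_X86_64

print(f"64-bit={is_64bit} little-endian={is_le} machine={machine}")
print("syscall 46 on x86_64:", "sendmsg")
```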
Dec 16 13:06:47.880787 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:06:47.885785 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:06:47.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.895603 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:06:47.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.899333 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:06:47.900841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:06:47.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.903240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:06:47.934005 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:06:47.937837 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:06:47.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.943840 kernel: loop1: detected capacity change from 0 to 229808 Dec 16 13:06:47.943101 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:06:47.950107 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:06:47.972598 systemd-journald[1463]: Time spent on flushing to /var/log/journal/ec289c946a30595dff5bd7fff5ab9de6 is 73.089ms for 1156 entries. Dec 16 13:06:47.972598 systemd-journald[1463]: System Journal (/var/log/journal/ec289c946a30595dff5bd7fff5ab9de6) is 8M, max 588.1M, 580.1M free. Dec 16 13:06:48.088776 systemd-journald[1463]: Received client request to flush runtime journal. Dec 16 13:06:48.088868 kernel: ACPI: bus type drm_connector registered Dec 16 13:06:47.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:47.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:47.994458 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:06:47.995142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:06:48.002708 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:06:48.004607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:06:48.012273 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:06:48.093003 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:06:48.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.097478 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:06:48.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.126827 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:06:48.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.128000 audit: BPF prog-id=18 op=LOAD Dec 16 13:06:48.128000 audit: BPF prog-id=19 op=LOAD Dec 16 13:06:48.129000 audit: BPF prog-id=20 op=LOAD Dec 16 13:06:48.131150 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 13:06:48.134000 audit: BPF prog-id=21 op=LOAD Dec 16 13:06:48.136686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:06:48.143096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:06:48.195000 audit: BPF prog-id=22 op=LOAD Dec 16 13:06:48.195000 audit: BPF prog-id=23 op=LOAD Dec 16 13:06:48.195000 audit: BPF prog-id=24 op=LOAD Dec 16 13:06:48.200000 audit: BPF prog-id=25 op=LOAD Dec 16 13:06:48.200000 audit: BPF prog-id=26 op=LOAD Dec 16 13:06:48.200000 audit: BPF prog-id=27 op=LOAD Dec 16 13:06:48.199000 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 13:06:48.203835 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:06:48.219095 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Dec 16 13:06:48.219554 systemd-tmpfiles[1539]: ACLs are not supported, ignoring. Dec 16 13:06:48.230807 kernel: loop2: detected capacity change from 0 to 73200 Dec 16 13:06:48.235629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:06:48.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.295073 systemd-nsresourced[1541]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 13:06:48.302092 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
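A little earlier systemd-journald reported spending 73.089ms flushing 1156 entries to the runtime journal. As a back-of-the-envelope aside (not part of the log), that works out to roughly 63 microseconds per entry:

```python
# Per-entry cost of the journal flush reported above: 73.089 ms for 1156 entries.
flush_ms, entries = 73.089, 1156
print(f"{flush_ms / entries * 1000:.1f} us per entry")   # ~63.2 us
```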
Dec 16 13:06:48.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.306040 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:06:48.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.332005 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:06:48.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.452859 systemd-oomd[1537]: No swap; memory pressure usage will be degraded Dec 16 13:06:48.453771 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 13:06:48.486957 systemd-resolved[1538]: Positive Trust Anchors: Dec 16 13:06:48.487441 systemd-resolved[1538]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:06:48.487451 systemd-resolved[1538]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:06:48.487518 systemd-resolved[1538]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:06:48.494538 systemd-resolved[1538]: Defaulting to hostname 'linux'. Dec 16 13:06:48.496495 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:06:48.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.497558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:06:48.558787 kernel: loop3: detected capacity change from 0 to 119256 Dec 16 13:06:48.846279 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:06:48.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:48.846000 audit: BPF prog-id=8 op=UNLOAD Dec 16 13:06:48.846000 audit: BPF prog-id=7 op=UNLOAD Dec 16 13:06:48.847000 audit: BPF prog-id=28 op=LOAD Dec 16 13:06:48.847000 audit: BPF prog-id=29 op=LOAD Dec 16 13:06:48.849212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:06:48.887992 kernel: loop4: detected capacity change from 0 to 111544 Dec 16 13:06:48.889638 systemd-udevd[1562]: Using default interface naming scheme 'v257'. 
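The negative trust anchors listed by systemd-resolved above are the reverse zones for private and special-use address space, where DNSSEC validation is deliberately not enforced. As an illustration (not part of the log), the sixteen 16.172.in-addr.arpa through 31.172.in-addr.arpa entries are simply the /16 slices of 172.16.0.0/12, which is easy to confirm with Python's ipaddress module:

```python
import ipaddress

# 172.16.0.0/12 splits into sixteen /16 networks; their reverse zones are exactly
# the 16.172.in-addr.arpa ... 31.172.in-addr.arpa negative trust anchors above.
zones = [
    f"{net.network_address.packed[1]}.172.in-addr.arpa"
    for net in ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=16)
]
print(zones[0], "...", zones[-1], f"({len(zones)} zones)")
# 16.172.in-addr.arpa ... 31.172.in-addr.arpa (16 zones)
```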
Dec 16 13:06:49.004902 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:06:49.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:49.007000 audit: BPF prog-id=30 op=LOAD Dec 16 13:06:49.010158 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:06:49.091050 (udev-worker)[1573]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:06:49.108294 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:06:49.124489 systemd-networkd[1567]: lo: Link UP Dec 16 13:06:49.124948 systemd-networkd[1567]: lo: Gained carrier Dec 16 13:06:49.127139 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:06:49.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:49.128782 systemd[1]: Reached target network.target - Network. Dec 16 13:06:49.131976 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:06:49.135974 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:06:49.170455 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:06:49.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:49.190849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 13:06:49.190958 kernel: loop5: detected capacity change from 0 to 229808 Dec 16 13:06:49.196020 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:06:49.198793 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 16 13:06:49.202779 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 13:06:49.206846 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:06:49.222791 kernel: loop6: detected capacity change from 0 to 73200 Dec 16 13:06:49.231789 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 13:06:49.234582 systemd-networkd[1567]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:06:49.234595 systemd-networkd[1567]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 13:06:49.237928 systemd-networkd[1567]: eth0: Link UP Dec 16 13:06:49.238330 systemd-networkd[1567]: eth0: Gained carrier Dec 16 13:06:49.239299 systemd-networkd[1567]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:06:49.244282 kernel: loop7: detected capacity change from 0 to 119256 Dec 16 13:06:49.249959 systemd-networkd[1567]: eth0: DHCPv4 address 172.31.28.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:06:49.262793 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 13:06:49.281400 (sd-merge)[1591]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 16 13:06:49.291715 (sd-merge)[1591]: Merged extensions into '/usr'. Dec 16 13:06:49.303987 systemd[1]: Reload requested from client PID 1495 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:06:49.304012 systemd[1]: Reloading... Dec 16 13:06:49.477830 zram_generator::config[1636]: No configuration found. Dec 16 13:06:49.929584 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:06:49.930689 systemd[1]: Reloading finished in 625 ms. Dec 16 13:06:49.951562 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:06:49.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.005852 systemd[1]: Starting ensure-sysext.service... Dec 16 13:06:50.012118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:06:50.015476 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:06:50.028000 audit: BPF prog-id=31 op=LOAD Dec 16 13:06:50.028000 audit: BPF prog-id=21 op=UNLOAD Dec 16 13:06:50.029000 audit: BPF prog-id=32 op=LOAD Dec 16 13:06:50.029000 audit: BPF prog-id=25 op=UNLOAD Dec 16 13:06:50.029000 audit: BPF prog-id=33 op=LOAD Dec 16 13:06:50.029000 audit: BPF prog-id=34 op=LOAD Dec 16 13:06:50.029000 audit: BPF prog-id=26 op=UNLOAD Dec 16 13:06:50.029000 audit: BPF prog-id=27 op=UNLOAD Dec 16 13:06:50.025247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
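The DHCPv4 lease systemd-networkd reports above (172.31.28.98/20 from gateway 172.31.16.1) is a typical EC2 VPC subnet assignment. A small sanity check with the ipaddress module, added here only for illustration, shows which /20 the address falls in and that the gateway is on-link:

```python
import ipaddress

# Lease reported above: 172.31.28.98/20, gateway 172.31.16.1.
iface = ipaddress.ip_interface("172.31.28.98/20")
gw = ipaddress.ip_address("172.31.16.1")

print(iface.network)            # 172.31.16.0/20
print(gw in iface.network)      # True: the default gateway is on the local subnet
```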
Dec 16 13:06:50.031000 audit: BPF prog-id=35 op=LOAD Dec 16 13:06:50.032000 audit: BPF prog-id=18 op=UNLOAD Dec 16 13:06:50.032000 audit: BPF prog-id=36 op=LOAD Dec 16 13:06:50.032000 audit: BPF prog-id=37 op=LOAD Dec 16 13:06:50.032000 audit: BPF prog-id=19 op=UNLOAD Dec 16 13:06:50.032000 audit: BPF prog-id=20 op=UNLOAD Dec 16 13:06:50.034000 audit: BPF prog-id=38 op=LOAD Dec 16 13:06:50.035000 audit: BPF prog-id=15 op=UNLOAD Dec 16 13:06:50.035000 audit: BPF prog-id=39 op=LOAD Dec 16 13:06:50.035000 audit: BPF prog-id=40 op=LOAD Dec 16 13:06:50.035000 audit: BPF prog-id=16 op=UNLOAD Dec 16 13:06:50.035000 audit: BPF prog-id=17 op=UNLOAD Dec 16 13:06:50.036000 audit: BPF prog-id=41 op=LOAD Dec 16 13:06:50.036000 audit: BPF prog-id=30 op=UNLOAD Dec 16 13:06:50.038000 audit: BPF prog-id=42 op=LOAD Dec 16 13:06:50.038000 audit: BPF prog-id=22 op=UNLOAD Dec 16 13:06:50.038000 audit: BPF prog-id=43 op=LOAD Dec 16 13:06:50.038000 audit: BPF prog-id=44 op=LOAD Dec 16 13:06:50.038000 audit: BPF prog-id=23 op=UNLOAD Dec 16 13:06:50.038000 audit: BPF prog-id=24 op=UNLOAD Dec 16 13:06:50.039000 audit: BPF prog-id=45 op=LOAD Dec 16 13:06:50.039000 audit: BPF prog-id=46 op=LOAD Dec 16 13:06:50.039000 audit: BPF prog-id=28 op=UNLOAD Dec 16 13:06:50.039000 audit: BPF prog-id=29 op=UNLOAD Dec 16 13:06:50.054031 systemd[1]: Reload requested from client PID 1776 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:06:50.054056 systemd[1]: Reloading... Dec 16 13:06:50.070960 systemd-tmpfiles[1778]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:06:50.071004 systemd-tmpfiles[1778]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:06:50.071417 systemd-tmpfiles[1778]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:06:50.075317 systemd-tmpfiles[1778]: ACLs are not supported, ignoring. Dec 16 13:06:50.075410 systemd-tmpfiles[1778]: ACLs are not supported, ignoring. Dec 16 13:06:50.088728 systemd-tmpfiles[1778]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:06:50.088744 systemd-tmpfiles[1778]: Skipping /boot Dec 16 13:06:50.103502 systemd-tmpfiles[1778]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:06:50.103527 systemd-tmpfiles[1778]: Skipping /boot Dec 16 13:06:50.168881 zram_generator::config[1818]: No configuration found. Dec 16 13:06:50.445647 systemd[1]: Reloading finished in 391 ms. 
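The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") are emitted when more than one tmpfiles.d line declares the same path. As a rough, hedged sketch of the same kind of check (real systemd-tmpfiles also merges /etc and /run fragments and understands quoting and specifiers), one could scan the shipped fragments like this:

```python
import glob
from collections import defaultdict

# Rough duplicate-path scan over the shipped tmpfiles.d fragments; only an
# approximation of what systemd-tmpfiles warns about in the log above.
seen = defaultdict(list)
for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(conf, encoding="utf-8", errors="replace") as f:
        for lineno, raw in enumerate(f, 1):
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                seen[fields[1]].append(f"{conf}:{lineno}")

for path, places in seen.items():
    if len(places) > 1:
        print(f"duplicate entries for {path}: {places}")
```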
Dec 16 13:06:50.462000 audit: BPF prog-id=47 op=LOAD Dec 16 13:06:50.462000 audit: BPF prog-id=48 op=LOAD Dec 16 13:06:50.462000 audit: BPF prog-id=45 op=UNLOAD Dec 16 13:06:50.462000 audit: BPF prog-id=46 op=UNLOAD Dec 16 13:06:50.463000 audit: BPF prog-id=49 op=LOAD Dec 16 13:06:50.463000 audit: BPF prog-id=32 op=UNLOAD Dec 16 13:06:50.463000 audit: BPF prog-id=50 op=LOAD Dec 16 13:06:50.463000 audit: BPF prog-id=51 op=LOAD Dec 16 13:06:50.463000 audit: BPF prog-id=33 op=UNLOAD Dec 16 13:06:50.463000 audit: BPF prog-id=34 op=UNLOAD Dec 16 13:06:50.464000 audit: BPF prog-id=52 op=LOAD Dec 16 13:06:50.464000 audit: BPF prog-id=41 op=UNLOAD Dec 16 13:06:50.465000 audit: BPF prog-id=53 op=LOAD Dec 16 13:06:50.465000 audit: BPF prog-id=38 op=UNLOAD Dec 16 13:06:50.465000 audit: BPF prog-id=54 op=LOAD Dec 16 13:06:50.465000 audit: BPF prog-id=55 op=LOAD Dec 16 13:06:50.465000 audit: BPF prog-id=39 op=UNLOAD Dec 16 13:06:50.465000 audit: BPF prog-id=40 op=UNLOAD Dec 16 13:06:50.466000 audit: BPF prog-id=56 op=LOAD Dec 16 13:06:50.466000 audit: BPF prog-id=31 op=UNLOAD Dec 16 13:06:50.467000 audit: BPF prog-id=57 op=LOAD Dec 16 13:06:50.467000 audit: BPF prog-id=42 op=UNLOAD Dec 16 13:06:50.467000 audit: BPF prog-id=58 op=LOAD Dec 16 13:06:50.467000 audit: BPF prog-id=59 op=LOAD Dec 16 13:06:50.467000 audit: BPF prog-id=43 op=UNLOAD Dec 16 13:06:50.467000 audit: BPF prog-id=44 op=UNLOAD Dec 16 13:06:50.467000 audit: BPF prog-id=60 op=LOAD Dec 16 13:06:50.467000 audit: BPF prog-id=35 op=UNLOAD Dec 16 13:06:50.467000 audit: BPF prog-id=61 op=LOAD Dec 16 13:06:50.467000 audit: BPF prog-id=62 op=LOAD Dec 16 13:06:50.467000 audit: BPF prog-id=36 op=UNLOAD Dec 16 13:06:50.467000 audit: BPF prog-id=37 op=UNLOAD Dec 16 13:06:50.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.479975 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:06:50.481478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:06:50.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.485173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:06:50.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.500447 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:06:50.503945 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:06:50.509902 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:06:50.516436 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:06:50.524926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:06:50.531570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
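Unit names like systemd-fsck@dev-disk-by\x2dlabel-OEM.service above are systemd's escaped form of the device path /dev/disk/by-label/OEM: path separators become "-" and a literal "-" (like most other special characters) becomes a \xNN escape, which is also why /etc/machine-id appears earlier as etc-machine\x2did.mount. A simplified re-implementation, included only to illustrate the scheme (systemd-escape covers more corner cases):

```python
# Simplified version of systemd's path escaping, enough to explain names such as
# "dev-disk-by\x2dlabel-OEM.device" in the log. systemd-escape handles more cases.
def escape_path(path: str) -> str:
    def esc(component: str) -> str:
        out = []
        for i, ch in enumerate(component):
            if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.extend(f"\\x{b:02x}" for b in ch.encode())
        return "".join(out)
    return "-".join(esc(p) for p in path.strip("/").split("/"))

print(escape_path("/dev/disk/by-label/OEM") + ".device")
# dev-disk-by\x2dlabel-OEM.device
```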
Dec 16 13:06:50.531883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:06:50.536086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:06:50.539202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:06:50.546196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:06:50.548011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:06:50.548378 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:06:50.548545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:06:50.548703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:50.555291 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:50.555642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:06:50.555911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:06:50.556165 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:06:50.556312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:06:50.556443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:50.556911 systemd-networkd[1567]: eth0: Gained IPv6LL Dec 16 13:06:50.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.566421 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:06:50.573138 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:06:50.574000 audit[1879]: SYSTEM_BOOT pid=1879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.595286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:50.595780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:06:50.602339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 16 13:06:50.603959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:06:50.604618 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:06:50.605054 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:06:50.605319 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:06:50.606995 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:06:50.611683 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:06:50.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.616654 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:06:50.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.623727 systemd[1]: Finished ensure-sysext.service. Dec 16 13:06:50.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.637542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:06:50.637815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:06:50.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.642379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:06:50.643274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:06:50.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.644936 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:06:50.645291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 16 13:06:50.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.646380 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:06:50.646837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:06:50.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:50.650449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:06:50.650535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:06:50.702000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 13:06:50.702000 audit[1911]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5c7a3c60 a2=420 a3=0 items=0 ppid=1875 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:50.702000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:06:50.704658 augenrules[1911]: No rules Dec 16 13:06:50.705796 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:06:50.706148 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:06:50.807918 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:06:50.808900 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:06:54.426308 ldconfig[1877]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:06:54.438385 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:06:54.440327 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:06:54.466197 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:06:54.467295 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:06:54.467972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:06:54.468464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:06:54.468994 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
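The PROCTITLE field in the audit record above is the process's argv, hex-encoded with NUL separators. Decoding it (an aside, not part of the log) recovers the auditctl command that augenrules ran to load the rule file:

```python
# Hex-encoded argv from the PROCTITLE audit record above; fields are NUL-separated.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```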
Dec 16 13:06:54.469747 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:06:54.470296 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:06:54.470845 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 13:06:54.471459 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 13:06:54.471879 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:06:54.472254 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:06:54.472301 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:06:54.472670 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:06:54.475252 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:06:54.477406 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:06:54.480224 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:06:54.480895 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:06:54.481354 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:06:54.484614 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:06:54.485524 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:06:54.487067 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:06:54.488556 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:06:54.489063 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:06:54.489525 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:06:54.489565 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:06:54.491528 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:06:54.495968 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:06:54.499085 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:06:54.507425 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:06:54.511196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:06:54.514308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:06:54.515183 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:06:54.519012 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:06:54.521724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:06:54.531044 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:06:54.538333 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:06:54.541599 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:06:54.551506 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:06:54.560696 systemd[1]: Starting setup-oem.service - Setup OEM... 
Dec 16 13:06:54.581683 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:06:54.587956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:06:54.612198 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:06:54.613002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:06:54.615021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:06:54.623111 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:06:54.636681 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:06:54.643992 jq[1927]: false Dec 16 13:06:54.650168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:06:54.653370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:06:54.655151 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:06:54.675940 jq[1943]: true Dec 16 13:06:54.676358 google_oslogin_nss_cache[1929]: oslogin_cache_refresh[1929]: Refreshing passwd entry cache Dec 16 13:06:54.667606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:06:54.670626 oslogin_cache_refresh[1929]: Refreshing passwd entry cache Dec 16 13:06:54.672088 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:06:54.707892 google_oslogin_nss_cache[1929]: oslogin_cache_refresh[1929]: Failure getting users, quitting Dec 16 13:06:54.707892 google_oslogin_nss_cache[1929]: oslogin_cache_refresh[1929]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:06:54.707892 google_oslogin_nss_cache[1929]: oslogin_cache_refresh[1929]: Refreshing group entry cache Dec 16 13:06:54.707226 oslogin_cache_refresh[1929]: Failure getting users, quitting Dec 16 13:06:54.707253 oslogin_cache_refresh[1929]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:06:54.707316 oslogin_cache_refresh[1929]: Refreshing group entry cache Dec 16 13:06:54.724139 google_oslogin_nss_cache[1929]: oslogin_cache_refresh[1929]: Failure getting groups, quitting Dec 16 13:06:54.724139 google_oslogin_nss_cache[1929]: oslogin_cache_refresh[1929]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:06:54.720695 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:06:54.713969 oslogin_cache_refresh[1929]: Failure getting groups, quitting Dec 16 13:06:54.721175 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:06:54.713992 oslogin_cache_refresh[1929]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:06:54.748260 extend-filesystems[1928]: Found /dev/nvme0n1p6 Dec 16 13:06:54.781178 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:06:54.782488 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
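ssh-key-proc-cmdline.service above installs an SSH key passed on the kernel command line. As a minimal sketch of the idea only (the log does not show the actual parameter name or parsing rules, so the sshkey= name below is an assumption), extracting such a value could look like:

```python
import shlex

# Hypothetical sketch: pull an sshkey="..." value out of the kernel command line.
# The parameter name and quoting rules are assumptions, not taken from this log.
def cmdline_value(key, cmdline_path="/proc/cmdline"):
    with open(cmdline_path) as f:
        for token in shlex.split(f.read()):
            if token.startswith(key + "="):
                return token.split("=", 1)[1]
    return None

print(cmdline_value("sshkey"))
```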
Dec 16 13:06:54.787891 extend-filesystems[1928]: Found /dev/nvme0n1p9 Dec 16 13:06:54.798611 update_engine[1942]: I20251216 13:06:54.798477 1942 main.cc:92] Flatcar Update Engine starting Dec 16 13:06:54.800722 extend-filesystems[1928]: Checking size of /dev/nvme0n1p9 Dec 16 13:06:54.816853 tar[1961]: linux-amd64/LICENSE Dec 16 13:06:54.816853 tar[1961]: linux-amd64/helm Dec 16 13:06:54.825369 jq[1953]: true Dec 16 13:06:54.839716 coreos-metadata[1924]: Dec 16 13:06:54.839 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:06:54.849740 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:06:54.860908 coreos-metadata[1924]: Dec 16 13:06:54.860 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 13:06:54.861977 coreos-metadata[1924]: Dec 16 13:06:54.861 INFO Fetch successful Dec 16 13:06:54.861977 coreos-metadata[1924]: Dec 16 13:06:54.861 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 13:06:54.862796 coreos-metadata[1924]: Dec 16 13:06:54.862 INFO Fetch successful Dec 16 13:06:54.862796 coreos-metadata[1924]: Dec 16 13:06:54.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 13:06:54.863496 coreos-metadata[1924]: Dec 16 13:06:54.863 INFO Fetch successful Dec 16 13:06:54.863782 coreos-metadata[1924]: Dec 16 13:06:54.863 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 13:06:54.864903 coreos-metadata[1924]: Dec 16 13:06:54.864 INFO Fetch successful Dec 16 13:06:54.865019 coreos-metadata[1924]: Dec 16 13:06:54.864 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 13:06:54.865563 coreos-metadata[1924]: Dec 16 13:06:54.865 INFO Fetch failed with 404: resource not found Dec 16 13:06:54.865876 coreos-metadata[1924]: Dec 16 13:06:54.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 13:06:54.866388 coreos-metadata[1924]: Dec 16 13:06:54.866 INFO Fetch successful Dec 16 13:06:54.866652 coreos-metadata[1924]: Dec 16 13:06:54.866 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 13:06:54.868352 coreos-metadata[1924]: Dec 16 13:06:54.868 INFO Fetch successful Dec 16 13:06:54.868352 coreos-metadata[1924]: Dec 16 13:06:54.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 13:06:54.869022 coreos-metadata[1924]: Dec 16 13:06:54.868 INFO Fetch successful Dec 16 13:06:54.869022 coreos-metadata[1924]: Dec 16 13:06:54.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 13:06:54.869688 coreos-metadata[1924]: Dec 16 13:06:54.869 INFO Fetch successful Dec 16 13:06:54.869688 coreos-metadata[1924]: Dec 16 13:06:54.869 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 13:06:54.871778 coreos-metadata[1924]: Dec 16 13:06:54.870 INFO Fetch successful Dec 16 13:06:54.878473 ntpd[1932]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:57 UTC 2025 (1): Starting Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:57 UTC 2025 (1): Starting Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: ---------------------------------------------------- Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 
13:06:54 ntpd[1932]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: corporation. Support and training for ntp-4 are Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: available at https://www.nwtime.org/support Dec 16 13:06:54.884139 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: ---------------------------------------------------- Dec 16 13:06:54.881638 ntpd[1932]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:06:54.881650 ntpd[1932]: ---------------------------------------------------- Dec 16 13:06:54.881660 ntpd[1932]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:06:54.881669 ntpd[1932]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:06:54.881677 ntpd[1932]: corporation. Support and training for ntp-4 are Dec 16 13:06:54.881686 ntpd[1932]: available at https://www.nwtime.org/support Dec 16 13:06:54.881695 ntpd[1932]: ---------------------------------------------------- Dec 16 13:06:54.895184 ntpd[1932]: proto: precision = 0.067 usec (-24) Dec 16 13:06:54.897953 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: proto: precision = 0.067 usec (-24) Dec 16 13:06:54.899718 dbus-daemon[1925]: [system] SELinux support is enabled Dec 16 13:06:54.900129 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:06:54.905641 ntpd[1932]: basedate set to 2025-11-30 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: basedate set to 2025-11-30 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: gps base set to 2025-11-30 (week 2395) Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listen normally on 3 eth0 172.31.28.98:123 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listen normally on 4 lo [::1]:123 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listen normally on 5 eth0 [fe80::4b1:ff:fef8:93f3%2]:123 Dec 16 13:06:54.906912 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: Listening on routing socket on fd #22 for interface updates Dec 16 13:06:54.905673 ntpd[1932]: gps base set to 2025-11-30 (week 2395) Dec 16 13:06:54.905864 ntpd[1932]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:06:54.905896 ntpd[1932]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:06:54.906134 ntpd[1932]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:06:54.906161 ntpd[1932]: Listen normally on 3 eth0 172.31.28.98:123 Dec 16 13:06:54.906191 ntpd[1932]: Listen normally on 4 lo [::1]:123 Dec 16 13:06:54.906219 ntpd[1932]: Listen normally on 5 eth0 [fe80::4b1:ff:fef8:93f3%2]:123 Dec 16 13:06:54.906245 ntpd[1932]: Listening on routing socket on fd #22 for interface updates Dec 16 13:06:54.907841 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:06:54.907888 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
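The coreos-metadata fetches above follow the EC2 IMDSv2 pattern: a PUT to /latest/api/token obtains a session token, then the dated meta-data paths are fetched with that token. A minimal illustration with urllib, assuming it runs on an EC2 instance (the token header names are the standard IMDSv2 ones; the instance-id path is one of those fetched above):

```python
import urllib.request

# Minimal IMDSv2 round trip mirroring the coreos-metadata fetches above:
# PUT a session token, then GET metadata with it.
BASE = "http://169.254.169.254"

tok_req = urllib.request.Request(
    f"{BASE}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(tok_req, timeout=2).read().decode()

md_req = urllib.request.Request(
    f"{BASE}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(md_req, timeout=2).read().decode())
```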
Dec 16 13:06:54.908803 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:06:54.908836 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:06:54.923690 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 13:06:54.930459 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:06:54.936485 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:06:54.936485 ntpd[1932]: 16 Dec 13:06:54 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:06:54.930508 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:06:54.942434 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 13:06:54.946083 dbus-daemon[1925]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1567 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:06:54.958899 extend-filesystems[1928]: Resized partition /dev/nvme0n1p9 Dec 16 13:06:54.961410 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:06:54.976460 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:06:54.982644 update_engine[1942]: I20251216 13:06:54.982079 1942 update_check_scheduler.cc:74] Next update check in 2m56s Dec 16 13:06:54.996800 extend-filesystems[2022]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:06:55.009124 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:06:55.026793 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 13:06:55.032516 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:06:55.038895 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:06:55.100425 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 16 13:06:55.097907 systemd-logind[1939]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:06:55.097942 systemd-logind[1939]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 13:06:55.097969 systemd-logind[1939]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:06:55.101626 systemd-logind[1939]: New seat seat0. Dec 16 13:06:55.105802 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:06:55.126064 extend-filesystems[2022]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:06:55.126064 extend-filesystems[2022]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 13:06:55.126064 extend-filesystems[2022]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Dec 16 13:06:55.160926 extend-filesystems[1928]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:06:55.129705 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:06:55.166012 bash[2024]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:06:55.131654 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:06:55.133066 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:06:55.155211 systemd[1]: Starting sshkeys.service... 
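The resize messages above show the root filesystem on /dev/nvme0n1p9 growing online from 1617920 to 2604027 blocks of 4 KiB. In byte terms (a quick aside, not part of the log) that is roughly 6.2 GiB growing to about 9.9 GiB:

```python
# The resize2fs/kernel messages above: root grows from 1617920 to 2604027 4-KiB blocks.
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 2_604_027

def gib(blocks):
    return blocks * BLOCK / 2**30

print(f"{gib(old_blocks):.2f} GiB -> {gib(new_blocks):.2f} GiB "
      f"(+{gib(new_blocks - old_blocks):.2f} GiB)")
```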
Dec 16 13:06:55.253186 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:06:55.282013 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:06:55.435038 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 13:06:55.439666 dbus-daemon[1925]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:06:55.442837 dbus-daemon[1925]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2015 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:06:55.451919 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 13:06:55.512303 amazon-ssm-agent[2008]: Initializing new seelog logger Dec 16 13:06:55.512303 amazon-ssm-agent[2008]: New Seelog Logger Creation Complete Dec 16 13:06:55.512303 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.512303 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.514184 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 processing appconfig overrides Dec 16 13:06:55.524349 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.524349 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.524520 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 processing appconfig overrides Dec 16 13:06:55.526399 locksmithd[2019]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:06:55.528900 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.528900 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.529037 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 processing appconfig overrides Dec 16 13:06:55.529624 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.5242 INFO Proxy environment variables: Dec 16 13:06:55.542790 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:55.542790 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 13:06:55.542790 amazon-ssm-agent[2008]: 2025/12/16 13:06:55 processing appconfig overrides Dec 16 13:06:55.631815 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.5242 INFO https_proxy: Dec 16 13:06:55.661353 coreos-metadata[2049]: Dec 16 13:06:55.661 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:06:55.665636 coreos-metadata[2049]: Dec 16 13:06:55.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 13:06:55.675490 coreos-metadata[2049]: Dec 16 13:06:55.675 INFO Fetch successful Dec 16 13:06:55.675634 coreos-metadata[2049]: Dec 16 13:06:55.675 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 13:06:55.678494 coreos-metadata[2049]: Dec 16 13:06:55.678 INFO Fetch successful Dec 16 13:06:55.686052 unknown[2049]: wrote ssh authorized keys file for user: core Dec 16 13:06:55.733810 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.5242 INFO http_proxy: Dec 16 13:06:55.783372 update-ssh-keys[2140]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:06:55.782536 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:06:55.790647 systemd[1]: Finished sshkeys.service. Dec 16 13:06:55.836271 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.5242 INFO no_proxy: Dec 16 13:06:55.939211 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.5244 INFO Checking if agent identity type OnPrem can be assumed Dec 16 13:06:55.975162 containerd[1969]: time="2025-12-16T13:06:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:06:55.978021 containerd[1969]: time="2025-12-16T13:06:55.977970033Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 13:06:56.038424 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.5246 INFO Checking if agent identity type EC2 can be assumed Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.079342375Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.422µs" Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.079401013Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.079458961Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.079478001Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.079674051Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.079695054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.080292939Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.080319372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.080625852Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.080641612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.080655522Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:06:56.080895 containerd[1969]: time="2025-12-16T13:06:56.080666172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:06:56.089299 polkitd[2069]: Started polkitd version 126 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.099905292Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.099959226Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100135398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100428850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100470019Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100485652Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100529668Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100846894Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:06:56.101097 containerd[1969]: time="2025-12-16T13:06:56.100943710Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.116890082Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.116977254Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117098989Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117123673Z" level=info msg="loading 
plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117154063Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117173544Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117191488Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117206144Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117223908Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117240888Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117260716Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117274906Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117289773Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:06:56.118782 containerd[1969]: time="2025-12-16T13:06:56.117309863Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117484415Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117512535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117533846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117548959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117566638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117579961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117597222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117611804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117626850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117644819Z" level=info msg="loading 
plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117660097Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117695419Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117794646Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117814919Z" level=info msg="Start snapshots syncer" Dec 16 13:06:56.119399 containerd[1969]: time="2025-12-16T13:06:56.117853864Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:06:56.119946 containerd[1969]: time="2025-12-16T13:06:56.118252647Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:06:56.119946 containerd[1969]: time="2025-12-16T13:06:56.118333377Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118421039Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118553977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118584077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 
16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118606307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118629081Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118652157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118673242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118692060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118723651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:06:56.120143 containerd[1969]: time="2025-12-16T13:06:56.118742520Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:06:56.121373 polkitd[2069]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:06:56.123779 containerd[1969]: time="2025-12-16T13:06:56.123713451Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:06:56.123903 containerd[1969]: time="2025-12-16T13:06:56.123883390Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:06:56.124680 containerd[1969]: time="2025-12-16T13:06:56.124651312Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:06:56.124894 containerd[1969]: time="2025-12-16T13:06:56.124872123Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:06:56.124969 containerd[1969]: time="2025-12-16T13:06:56.124952610Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:06:56.125043 containerd[1969]: time="2025-12-16T13:06:56.125031180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:06:56.125121 containerd[1969]: time="2025-12-16T13:06:56.125101784Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:06:56.125193 containerd[1969]: time="2025-12-16T13:06:56.125181871Z" level=info msg="runtime interface created" Dec 16 13:06:56.125246 containerd[1969]: time="2025-12-16T13:06:56.125236267Z" level=info msg="created NRI interface" Dec 16 13:06:56.125312 containerd[1969]: time="2025-12-16T13:06:56.125300074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:06:56.125394 containerd[1969]: time="2025-12-16T13:06:56.125383019Z" level=info msg="Connect containerd service" Dec 16 13:06:56.125485 containerd[1969]: time="2025-12-16T13:06:56.125473891Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:06:56.126593 containerd[1969]: time="2025-12-16T13:06:56.126559224Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:06:56.131735 polkitd[2069]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:06:56.132399 polkitd[2069]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:06:56.133770 polkitd[2069]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:06:56.133835 polkitd[2069]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:06:56.133885 polkitd[2069]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:06:56.139457 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9063 INFO Agent will take identity from EC2 Dec 16 13:06:56.141461 polkitd[2069]: Finished loading, compiling and executing 2 rules Dec 16 13:06:56.142036 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 13:06:56.150410 dbus-daemon[1925]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:06:56.157543 polkitd[2069]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:06:56.227941 amazon-ssm-agent[2008]: 2025/12/16 13:06:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:56.228244 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:06:56.228521 amazon-ssm-agent[2008]: 2025/12/16 13:06:56 processing appconfig overrides Dec 16 13:06:56.243397 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9275 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 13:06:56.270785 systemd-hostnamed[2015]: Hostname set to (transient) Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9275 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9276 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9276 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9276 INFO [Registrar] Starting registrar module Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9394 INFO [EC2Identity] Checking disk for registration info Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9395 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:55.9395 INFO [EC2Identity] Generating registration keypair Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.1632 INFO [EC2Identity] Checking write access before registering Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.1679 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2277 INFO [EC2Identity] EC2 registration was successful. Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2277 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2278 INFO [CredentialRefresher] credentialRefresher has started Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2278 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2695 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 13:06:56.273182 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2698 INFO [CredentialRefresher] Credentials ready Dec 16 13:06:56.288937 systemd-resolved[1538]: System hostname changed to 'ip-172-31-28-98'. Dec 16 13:06:56.341287 amazon-ssm-agent[2008]: 2025-12-16 13:06:56.2700 INFO [CredentialRefresher] Next credential rotation will be in 29.99999207105 minutes Dec 16 13:06:56.476841 sshd_keygen[1962]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:06:56.538258 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:06:56.546243 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:06:56.565140 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:06:56.567979 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:06:56.574450 tar[1961]: linux-amd64/README.md Dec 16 13:06:56.578221 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:06:56.598824 containerd[1969]: time="2025-12-16T13:06:56.598677996Z" level=info msg="Start subscribing containerd event" Dec 16 13:06:56.599127 containerd[1969]: time="2025-12-16T13:06:56.598987947Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:06:56.599127 containerd[1969]: time="2025-12-16T13:06:56.599006500Z" level=info msg="Start recovering state" Dec 16 13:06:56.599248 containerd[1969]: time="2025-12-16T13:06:56.599228566Z" level=info msg="Start event monitor" Dec 16 13:06:56.599285 containerd[1969]: time="2025-12-16T13:06:56.599249633Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:06:56.599285 containerd[1969]: time="2025-12-16T13:06:56.599261745Z" level=info msg="Start streaming server" Dec 16 13:06:56.599285 containerd[1969]: time="2025-12-16T13:06:56.599273491Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:06:56.599409 containerd[1969]: time="2025-12-16T13:06:56.599302267Z" level=info msg="runtime interface starting up..." Dec 16 13:06:56.599409 containerd[1969]: time="2025-12-16T13:06:56.599311702Z" level=info msg="starting plugins..." Dec 16 13:06:56.599409 containerd[1969]: time="2025-12-16T13:06:56.599329195Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:06:56.599506 containerd[1969]: time="2025-12-16T13:06:56.599059067Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:06:56.600830 containerd[1969]: time="2025-12-16T13:06:56.600591167Z" level=info msg="containerd successfully booted in 0.633160s" Dec 16 13:06:56.599859 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:06:56.612337 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:06:56.622041 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:06:56.625240 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:06:56.631240 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:06:56.632273 systemd[1]: Reached target getty.target - Login Prompts. 
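[Editor's note] containerd reports above that it is serving on /run/containerd/containerd.sock (plus the .ttrpc socket) and that it "successfully booted in 0.633160s". A quick, purely illustrative way to confirm from the host that the runtime socket is accepting connections is a plain UNIX-socket connect; the socket is root-owned, so this sketch assumes it runs as root.

```python
import socket

# containerd logs "msg=serving... address=/run/containerd/containerd.sock" above;
# a successful connect() is enough to show the runtime is up and listening.
SOCK = "/run/containerd/containerd.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.settimeout(2)
    s.connect(SOCK)
    print(f"{SOCK} is accepting connections")
except OSError as err:
    print(f"could not reach {SOCK}: {err}")
finally:
    s.close()
```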
Dec 16 13:06:57.288398 amazon-ssm-agent[2008]: 2025-12-16 13:06:57.2882 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 13:06:57.388699 amazon-ssm-agent[2008]: 2025-12-16 13:06:57.2915 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2205) started Dec 16 13:06:57.489142 amazon-ssm-agent[2008]: 2025-12-16 13:06:57.2915 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 13:06:59.579029 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:06:59.587278 systemd[1]: Started sshd@0-172.31.28.98:22-139.178.89.65:45342.service - OpenSSH per-connection server daemon (139.178.89.65:45342). Dec 16 13:07:00.312928 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 45342 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:00.319822 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:00.342612 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:07:00.345067 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:07:00.364238 systemd-logind[1939]: New session 1 of user core. Dec 16 13:07:00.419722 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:07:00.425728 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:07:00.462347 (systemd)[2223]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:07:00.468452 systemd-logind[1939]: New session c1 of user core. Dec 16 13:07:00.487119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:00.489387 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:07:00.506676 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:00.760606 systemd[2223]: Queued start job for default target default.target. Dec 16 13:07:00.776700 systemd[2223]: Created slice app.slice - User Application Slice. Dec 16 13:07:00.777040 systemd[2223]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 13:07:00.777076 systemd[2223]: Reached target paths.target - Paths. Dec 16 13:07:00.777157 systemd[2223]: Reached target timers.target - Timers. Dec 16 13:07:00.784650 systemd[2223]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:07:00.789142 systemd[2223]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 13:07:00.833535 systemd[2223]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:07:00.833864 systemd[2223]: Reached target sockets.target - Sockets. Dec 16 13:07:00.866863 systemd[2223]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 13:07:00.867045 systemd[2223]: Reached target basic.target - Basic System. Dec 16 13:07:00.867131 systemd[2223]: Reached target default.target - Main User Target. Dec 16 13:07:00.867175 systemd[2223]: Startup finished in 388ms. Dec 16 13:07:00.870790 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:07:00.883328 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 16 13:07:00.884294 systemd[1]: Startup finished in 3.810s (kernel) + 10.487s (initrd) + 15.564s (userspace) = 29.861s. Dec 16 13:07:01.084439 systemd[1]: Started sshd@1-172.31.28.98:22-139.178.89.65:57598.service - OpenSSH per-connection server daemon (139.178.89.65:57598). Dec 16 13:07:01.591236 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 57598 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:01.598405 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:01.621511 systemd-logind[1939]: New session 2 of user core. Dec 16 13:07:01.633121 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:07:01.744797 sshd[2253]: Connection closed by 139.178.89.65 port 57598 Dec 16 13:07:01.745278 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:01.816608 systemd[1]: sshd@1-172.31.28.98:22-139.178.89.65:57598.service: Deactivated successfully. Dec 16 13:07:01.837627 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:07:01.847279 systemd-logind[1939]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:07:01.869217 systemd[1]: Started sshd@2-172.31.28.98:22-139.178.89.65:57612.service - OpenSSH per-connection server daemon (139.178.89.65:57612). Dec 16 13:07:01.871087 systemd-logind[1939]: Removed session 2. Dec 16 13:07:02.207874 systemd-resolved[1538]: Clock change detected. Flushing caches. Dec 16 13:07:02.553298 sshd[2259]: Accepted publickey for core from 139.178.89.65 port 57612 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:02.566951 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:02.593637 systemd-logind[1939]: New session 3 of user core. Dec 16 13:07:02.609451 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:07:02.709904 sshd[2262]: Connection closed by 139.178.89.65 port 57612 Dec 16 13:07:02.710977 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:02.805032 systemd[1]: sshd@2-172.31.28.98:22-139.178.89.65:57612.service: Deactivated successfully. Dec 16 13:07:02.814735 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:07:02.817181 systemd-logind[1939]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:07:02.821457 systemd[1]: Started sshd@3-172.31.28.98:22-139.178.89.65:57620.service - OpenSSH per-connection server daemon (139.178.89.65:57620). Dec 16 13:07:02.823790 systemd-logind[1939]: Removed session 3. Dec 16 13:07:03.232833 sshd[2268]: Accepted publickey for core from 139.178.89.65 port 57620 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:03.234696 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:03.254593 systemd-logind[1939]: New session 4 of user core. Dec 16 13:07:03.272446 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:07:03.346838 sshd[2271]: Connection closed by 139.178.89.65 port 57620 Dec 16 13:07:03.347687 sshd-session[2268]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:03.365523 systemd[1]: sshd@3-172.31.28.98:22-139.178.89.65:57620.service: Deactivated successfully. Dec 16 13:07:03.370455 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:07:03.424963 systemd-logind[1939]: Session 4 logged out. Waiting for processes to exit. 
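[Editor's note] The "Startup finished" entry above breaks the 29.861s boot into kernel, initrd and userspace phases, in the same form that `systemd-analyze` reports after boot. A small sketch (illustrative only) that parses that exact line and checks the phases add up to the printed total:

```python
import math
import re

# The exact summary systemd printed at the end of boot (copied from the log above).
line = ("Startup finished in 3.810s (kernel) + 10.487s (initrd) "
        "+ 15.564s (userspace) = 29.861s.")

# Pull out each "<seconds>s (<phase>)" pair and the "= <total>s" figure.
phases = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([\d.]+)s", line).group(1))

print(phases)  # {'kernel': 3.81, 'initrd': 10.487, 'userspace': 15.564}
assert math.isclose(sum(phases.values()), total, abs_tol=1e-6)
```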
Dec 16 13:07:03.432671 systemd[1]: Started sshd@4-172.31.28.98:22-139.178.89.65:57624.service - OpenSSH per-connection server daemon (139.178.89.65:57624). Dec 16 13:07:03.439532 systemd-logind[1939]: Removed session 4. Dec 16 13:07:03.683391 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 57624 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:03.690717 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:03.724409 systemd-logind[1939]: New session 5 of user core. Dec 16 13:07:03.736424 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:07:04.072547 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:07:04.072954 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:04.091827 sudo[2281]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:04.117538 sshd[2280]: Connection closed by 139.178.89.65 port 57624 Dec 16 13:07:04.127699 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:04.146455 systemd[1]: sshd@4-172.31.28.98:22-139.178.89.65:57624.service: Deactivated successfully. Dec 16 13:07:04.154319 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:07:04.184869 systemd-logind[1939]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:07:04.191711 systemd[1]: Started sshd@5-172.31.28.98:22-139.178.89.65:57638.service - OpenSSH per-connection server daemon (139.178.89.65:57638). Dec 16 13:07:04.195835 systemd-logind[1939]: Removed session 5. Dec 16 13:07:04.502155 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 57638 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:04.549622 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:04.615658 systemd-logind[1939]: New session 6 of user core. Dec 16 13:07:04.624090 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:07:04.704818 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:07:04.705462 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:04.741181 sudo[2293]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:04.786704 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:07:04.787166 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:04.824519 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:07:04.973181 kernel: kauditd_printk_skb: 133 callbacks suppressed Dec 16 13:07:04.973359 kernel: audit: type=1305 audit(1765890424.962:238): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:07:04.962000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:07:04.975935 augenrules[2316]: No rules Dec 16 13:07:04.976742 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 16 13:07:04.962000 audit[2316]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff89439c90 a2=420 a3=0 items=0 ppid=2296 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:04.985282 kernel: audit: type=1300 audit(1765890424.962:238): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff89439c90 a2=420 a3=0 items=0 ppid=2296 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:04.977142 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:07:04.962000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:07:04.987999 sudo[2292]: pam_unix(sudo:session): session closed for user root Dec 16 13:07:04.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:04.993839 kernel: audit: type=1327 audit(1765890424.962:238): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:07:04.993963 kernel: audit: type=1130 audit(1765890424.978:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:04.993999 kernel: audit: type=1131 audit(1765890424.978:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:04.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:04.988000 audit[2292]: USER_END pid=2292 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.012253 kernel: audit: type=1106 audit(1765890424.988:241): pid=2292 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.023891 kernel: audit: type=1104 audit(1765890424.988:242): pid=2292 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:07:04.988000 audit[2292]: CRED_DISP pid=2292 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:05.024261 sshd[2291]: Connection closed by 139.178.89.65 port 57638 Dec 16 13:07:05.024809 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:05.036000 audit[2288]: USER_END pid=2288 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.036000 audit[2288]: CRED_DISP pid=2288 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.059724 kernel: audit: type=1106 audit(1765890425.036:243): pid=2288 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.059889 kernel: audit: type=1104 audit(1765890425.036:244): pid=2288 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.066081 kubelet[2231]: E1216 13:07:05.066024 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:05.067882 systemd[1]: sshd@5-172.31.28.98:22-139.178.89.65:57638.service: Deactivated successfully. Dec 16 13:07:05.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.98:22-139.178.89.65:57638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.072168 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:07:05.074290 systemd-logind[1939]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:07:05.075094 kernel: audit: type=1131 audit(1765890425.068:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.28.98:22-139.178.89.65:57638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.076294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:05.076497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:05.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:07:05.076992 systemd[1]: kubelet.service: Consumed 1.218s CPU time, 268.9M memory peak. Dec 16 13:07:05.080635 systemd-logind[1939]: Removed session 6. Dec 16 13:07:05.083045 systemd[1]: Started sshd@6-172.31.28.98:22-139.178.89.65:57644.service - OpenSSH per-connection server daemon (139.178.89.65:57644). 
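[Editor's note] The audit records in this stretch (and the NETFILTER_CFG ones further down) carry PROCTITLE fields whose value is the recorded process's argv, hex-encoded with NUL bytes separating the arguments. Decoding one recovers the exact command behind the record; the auditctl record above, for instance, decodes to `/sbin/auditctl -R /etc/audit/audit.rules`. A small helper, purely for reading these logs:

```python
def decode_proctitle(hex_blob: str) -> str:
    """PROCTITLE is argv hex-encoded with NUL separators; rebuild the command line."""
    raw = bytes.fromhex(hex_blob)
    return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg)

# Value taken verbatim from the auditctl PROCTITLE record above.
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))  # -> /sbin/auditctl -R /etc/audit/audit.rules
```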
Dec 16 13:07:05.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.98:22-139.178.89.65:57644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.323000 audit[2327]: USER_ACCT pid=2327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.323863 sshd[2327]: Accepted publickey for core from 139.178.89.65 port 57644 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:07:05.326000 audit[2327]: CRED_ACQ pid=2327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.327000 audit[2327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3bf53c10 a2=3 a3=0 items=0 ppid=1 pid=2327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:05.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:05.328038 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:05.353191 systemd-logind[1939]: New session 7 of user core. Dec 16 13:07:05.366189 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:07:05.373000 audit[2327]: USER_START pid=2327 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.378000 audit[2330]: CRED_ACQ pid=2330 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:07:05.424000 audit[2331]: USER_ACCT pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.425192 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:07:05.425000 audit[2331]: CRED_REFR pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:07:05.425608 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:07:05.430000 audit[2331]: USER_START pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:07:07.766896 systemd[1]: Starting docker.service - Docker Application Container Engine... 
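[Editor's note] The NETFILTER_CFG records that follow are dockerd setting up its standard chains while "Loading containers": decoding their PROCTITLE fields (as in the helper above) yields commands such as `/usr/bin/iptables --wait -t nat -N DOCKER`, creating DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, the two DOCKER-ISOLATION-STAGE chains and DOCKER-USER in the nat and filter tables, then the IPv6 equivalents via ip6tables. A hedged way to verify those chains afterwards (assumes root and the xtables-nft CLI seen in the records):

```python
import subprocess

# List chains in each table and keep the ones dockerd created.
for table in ("nat", "filter"):
    out = subprocess.run(
        ["iptables", "--wait", "-t", table, "-S"],  # requires root privileges
        capture_output=True, text=True, check=True,
    ).stdout
    chains = [line.split()[1] for line in out.splitlines() if line.startswith("-N DOCKER")]
    print(table, chains)
```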
Dec 16 13:07:07.793820 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:07:09.211468 dockerd[2348]: time="2025-12-16T13:07:09.211000897Z" level=info msg="Starting up" Dec 16 13:07:09.216910 dockerd[2348]: time="2025-12-16T13:07:09.216565213Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:07:09.236373 dockerd[2348]: time="2025-12-16T13:07:09.236318317Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:07:09.308149 dockerd[2348]: time="2025-12-16T13:07:09.307954379Z" level=info msg="Loading containers: start." Dec 16 13:07:09.322087 kernel: Initializing XFRM netlink socket Dec 16 13:07:09.473000 audit[2396]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.473000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcaee5f1d0 a2=0 a3=0 items=0 ppid=2348 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:07:09.476000 audit[2398]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.476000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd86447550 a2=0 a3=0 items=0 ppid=2348 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:07:09.478000 audit[2400]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.478000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8c779400 a2=0 a3=0 items=0 ppid=2348 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:07:09.480000 audit[2402]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.480000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf1df3630 a2=0 a3=0 items=0 ppid=2348 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:07:09.483000 audit[2404]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.483000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc300cc360 a2=0 a3=0 items=0 ppid=2348 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:07:09.485000 audit[2406]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.485000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe78ab3100 a2=0 a3=0 items=0 ppid=2348 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:07:09.494000 audit[2408]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.494000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffcbe43850 a2=0 a3=0 items=0 ppid=2348 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:07:09.501000 audit[2410]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.501000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff69f8fd10 a2=0 a3=0 items=0 ppid=2348 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.501000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:07:09.604000 audit[2413]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.604000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe34d4a0a0 a2=0 a3=0 items=0 ppid=2348 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 13:07:09.608000 audit[2415]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2415 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.608000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff70e25850 a2=0 a3=0 items=0 ppid=2348 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:07:09.611000 audit[2417]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.611000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff5d7d29f0 a2=0 a3=0 items=0 ppid=2348 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:07:09.613000 audit[2419]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.613000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffceca01130 a2=0 a3=0 items=0 ppid=2348 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.613000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:07:09.616000 audit[2421]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.616000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd598518c0 a2=0 a3=0 items=0 ppid=2348 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:07:09.662000 audit[2451]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.662000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff237491f0 a2=0 a3=0 items=0 ppid=2348 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:07:09.664000 audit[2453]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.664000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd6886e710 a2=0 a3=0 items=0 ppid=2348 pid=2453 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:07:09.667000 audit[2455]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.667000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff656dd540 a2=0 a3=0 items=0 ppid=2348 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:07:09.669000 audit[2457]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.669000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd30884bb0 a2=0 a3=0 items=0 ppid=2348 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.669000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:07:09.672000 audit[2459]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.672000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe20093660 a2=0 a3=0 items=0 ppid=2348 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.672000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:07:09.674000 audit[2461]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.674000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffdb479b40 a2=0 a3=0 items=0 ppid=2348 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:07:09.677000 audit[2463]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.677000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa762e940 a2=0 a3=0 items=0 ppid=2348 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.677000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:07:09.679000 audit[2465]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.679000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe5fb2a1b0 a2=0 a3=0 items=0 ppid=2348 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:07:09.682000 audit[2467]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.682000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcd71036b0 a2=0 a3=0 items=0 ppid=2348 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 13:07:09.684000 audit[2469]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.684000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd22080b90 a2=0 a3=0 items=0 ppid=2348 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:07:09.687000 audit[2471]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.687000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc22a3a840 a2=0 a3=0 items=0 ppid=2348 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.687000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:07:09.689000 audit[2473]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.689000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc2b7f9ba0 a2=0 a3=0 items=0 ppid=2348 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.689000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:07:09.692000 audit[2475]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.692000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdbfa63790 a2=0 a3=0 items=0 ppid=2348 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:07:09.699000 audit[2480]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.699000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0ae71d50 a2=0 a3=0 items=0 ppid=2348 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:07:09.701000 audit[2482]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.701000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffeec26060 a2=0 a3=0 items=0 ppid=2348 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:07:09.704000 audit[2484]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.704000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcdaaea8b0 a2=0 a3=0 items=0 ppid=2348 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:07:09.706000 audit[2486]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.706000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea3116250 a2=0 a3=0 items=0 ppid=2348 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:07:09.709000 audit[2488]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.709000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcc86676a0 a2=0 a3=0 items=0 ppid=2348 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:07:09.711000 audit[2490]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:09.711000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff07f9fb70 a2=0 a3=0 items=0 ppid=2348 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:07:09.724540 (udev-worker)[2368]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:07:09.738000 audit[2495]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.738000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffed7d32610 a2=0 a3=0 items=0 ppid=2348 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 13:07:09.744000 audit[2497]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.744000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd0ad5a1b0 a2=0 a3=0 items=0 ppid=2348 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 13:07:09.756000 audit[2505]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.756000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffff9c0a7b0 a2=0 a3=0 items=0 ppid=2348 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 13:07:09.769000 audit[2511]: NETFILTER_CFG table=filter:37 family=2 entries=1 
op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.769000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc25d3eab0 a2=0 a3=0 items=0 ppid=2348 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 13:07:09.773000 audit[2513]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.773000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc0d3a4850 a2=0 a3=0 items=0 ppid=2348 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 13:07:09.775000 audit[2515]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.775000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff674375d0 a2=0 a3=0 items=0 ppid=2348 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 13:07:09.778000 audit[2517]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.778000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc92645fb0 a2=0 a3=0 items=0 ppid=2348 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.778000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:07:09.780000 audit[2519]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:09.780000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc590b6ed0 a2=0 a3=0 items=0 ppid=2348 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.780000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 13:07:09.781564 systemd-networkd[1567]: docker0: Link UP Dec 16 13:07:09.786811 dockerd[2348]: time="2025-12-16T13:07:09.786751205Z" level=info msg="Loading containers: done." Dec 16 13:07:09.805414 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2607635921-merged.mount: Deactivated successfully. Dec 16 13:07:09.819807 dockerd[2348]: time="2025-12-16T13:07:09.819746036Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:07:09.820186 dockerd[2348]: time="2025-12-16T13:07:09.819878692Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:07:09.820272 dockerd[2348]: time="2025-12-16T13:07:09.820247190Z" level=info msg="Initializing buildkit" Dec 16 13:07:09.853920 dockerd[2348]: time="2025-12-16T13:07:09.853871787Z" level=info msg="Completed buildkit initialization" Dec 16 13:07:09.862296 dockerd[2348]: time="2025-12-16T13:07:09.862235053Z" level=info msg="Daemon has completed initialization" Dec 16 13:07:09.862536 dockerd[2348]: time="2025-12-16T13:07:09.862498571Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:07:09.862583 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:07:09.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:11.854566 containerd[1969]: time="2025-12-16T13:07:11.854517446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 13:07:12.612089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732819543.mount: Deactivated successfully. 
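Editor's note: the audit PROCTITLE fields in the records above encode the full iptables/ip6tables command line as hex, with NUL bytes separating the arguments; decoded, they show dockerd creating the DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains for both IPv4 (family=2) and IPv6 (family=10) before docker0 comes up. A minimal Python sketch for decoding such a field (the helper name is made up; the sample value is copied from the first PROCTITLE record in this section):

# Hedged sketch: turn an audit PROCTITLE hex string back into the argv it records.
# Arguments are NUL-separated in the raw value.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(
    "2F7573722F62696E2F6970367461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B4552"
))
# Prints: /usr/bin/ip6tables --wait -t filter -N DOCKER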
Dec 16 13:07:13.938671 containerd[1969]: time="2025-12-16T13:07:13.938480252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:13.941034 containerd[1969]: time="2025-12-16T13:07:13.940932104Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28446091" Dec 16 13:07:13.943900 containerd[1969]: time="2025-12-16T13:07:13.943261386Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:13.947908 containerd[1969]: time="2025-12-16T13:07:13.947857591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:13.950082 containerd[1969]: time="2025-12-16T13:07:13.949920197Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.095352331s" Dec 16 13:07:13.950082 containerd[1969]: time="2025-12-16T13:07:13.949975875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 13:07:13.952041 containerd[1969]: time="2025-12-16T13:07:13.951879733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 13:07:15.327352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:07:15.330884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:15.615628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:15.617203 kernel: kauditd_printk_skb: 133 callbacks suppressed Dec 16 13:07:15.617349 kernel: audit: type=1130 audit(1765890435.616:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:15.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:15.633436 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:15.724232 kubelet[2628]: E1216 13:07:15.723485 2628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:15.734379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:15.734588 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
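Editor's note: the kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is only written by kubeadm init/join, so this failure (and the restart counter ticking up) is expected until bootstrap runs. A small sketch for checking the relevant paths by hand; only the config.yaml and ca.crt paths appear in this log, while the two kubeconfig paths are standard kubeadm locations assumed for illustration:

from pathlib import Path

# Hedged sketch: report which of the files a kubeadm-bootstrapped kubelet expects
# are present on this node.
expected = [
    "/var/lib/kubelet/config.yaml",            # missing -> the error above
    "/etc/kubernetes/pki/ca.crt",              # client-ca bundle referenced later in the log
    "/etc/kubernetes/kubelet.conf",            # assumed kubeadm default
    "/etc/kubernetes/bootstrap-kubelet.conf",  # assumed kubeadm default
]
for path in expected:
    print(f"{path}: {'present' if Path(path).exists() else 'missing'}")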
Dec 16 13:07:15.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:07:15.737170 systemd[1]: kubelet.service: Consumed 230ms CPU time, 110.7M memory peak. Dec 16 13:07:15.741097 kernel: audit: type=1131 audit(1765890435.736:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:07:16.107330 containerd[1969]: time="2025-12-16T13:07:16.106901893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:16.109571 containerd[1969]: time="2025-12-16T13:07:16.109277974Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 13:07:16.112238 containerd[1969]: time="2025-12-16T13:07:16.112186198Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:16.116817 containerd[1969]: time="2025-12-16T13:07:16.116757271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:16.118078 containerd[1969]: time="2025-12-16T13:07:16.118014613Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.166090567s" Dec 16 13:07:16.118256 containerd[1969]: time="2025-12-16T13:07:16.118236036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 13:07:16.118987 containerd[1969]: time="2025-12-16T13:07:16.118958177Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 13:07:17.774795 containerd[1969]: time="2025-12-16T13:07:17.774711964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:17.776141 containerd[1969]: time="2025-12-16T13:07:17.776074954Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 13:07:17.778249 containerd[1969]: time="2025-12-16T13:07:17.778200049Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:17.785891 containerd[1969]: time="2025-12-16T13:07:17.785816378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:17.787101 containerd[1969]: time="2025-12-16T13:07:17.786550952Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" 
with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.667555939s" Dec 16 13:07:17.787101 containerd[1969]: time="2025-12-16T13:07:17.786587310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 13:07:17.787435 containerd[1969]: time="2025-12-16T13:07:17.787415285Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 13:07:19.136042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503059496.mount: Deactivated successfully. Dec 16 13:07:19.743261 containerd[1969]: time="2025-12-16T13:07:19.743197053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:19.745307 containerd[1969]: time="2025-12-16T13:07:19.745150511Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 16 13:07:19.747089 containerd[1969]: time="2025-12-16T13:07:19.746121265Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:19.748758 containerd[1969]: time="2025-12-16T13:07:19.748719924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:19.749337 containerd[1969]: time="2025-12-16T13:07:19.749295408Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.961762937s" Dec 16 13:07:19.749440 containerd[1969]: time="2025-12-16T13:07:19.749347217Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 13:07:19.749919 containerd[1969]: time="2025-12-16T13:07:19.749894845Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 13:07:20.390546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349027195.mount: Deactivated successfully. 
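Editor's note: each containerd "Pulled image" record above reports the image reference, a reported size in bytes, and the wall-clock pull duration, so a rough pull throughput can be read straight out of the log. A hedged Python sketch (the regex and the embedded sample line, abridged from the kube-apiserver pull above, are illustrative only, not a containerd API):

import re

# Hedged sketch: extract size and duration from a containerd "Pulled image" message
# and estimate throughput. The sample is abridged from the kube-apiserver pull above.
sample = ('Pulled image "registry.k8s.io/kube-apiserver:v1.33.7" '
          'size "30111311" in 2.095352331s')

m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)s', sample)
if m:
    image, size_bytes, seconds = m.group(1), int(m.group(2)), float(m.group(3))
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s over {seconds:.2f}s")
# -> registry.k8s.io/kube-apiserver:v1.33.7: 14.4 MB/s over 2.10s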
Dec 16 13:07:21.615443 containerd[1969]: time="2025-12-16T13:07:21.615377705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:21.618349 containerd[1969]: time="2025-12-16T13:07:21.618173002Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 16 13:07:21.622048 containerd[1969]: time="2025-12-16T13:07:21.621550829Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:21.633913 containerd[1969]: time="2025-12-16T13:07:21.633362811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:21.635343 containerd[1969]: time="2025-12-16T13:07:21.634794026Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.884857836s" Dec 16 13:07:21.635343 containerd[1969]: time="2025-12-16T13:07:21.634849573Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 13:07:21.636432 containerd[1969]: time="2025-12-16T13:07:21.636391877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:07:22.245544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781462177.mount: Deactivated successfully. 
Dec 16 13:07:22.260542 containerd[1969]: time="2025-12-16T13:07:22.260447383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:22.262770 containerd[1969]: time="2025-12-16T13:07:22.262648634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:07:22.265500 containerd[1969]: time="2025-12-16T13:07:22.265415043Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:22.270964 containerd[1969]: time="2025-12-16T13:07:22.269911784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:07:22.270964 containerd[1969]: time="2025-12-16T13:07:22.270784852Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 634.347425ms" Dec 16 13:07:22.270964 containerd[1969]: time="2025-12-16T13:07:22.270828938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:07:22.271659 containerd[1969]: time="2025-12-16T13:07:22.271630873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 13:07:22.935881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993504506.mount: Deactivated successfully. 
Dec 16 13:07:25.200911 containerd[1969]: time="2025-12-16T13:07:25.200846740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:25.206957 containerd[1969]: time="2025-12-16T13:07:25.206880368Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Dec 16 13:07:25.213712 containerd[1969]: time="2025-12-16T13:07:25.212705926Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:25.223880 containerd[1969]: time="2025-12-16T13:07:25.223810338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:25.226378 containerd[1969]: time="2025-12-16T13:07:25.226171829Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.954384529s" Dec 16 13:07:25.226378 containerd[1969]: time="2025-12-16T13:07:25.226223072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 13:07:25.825695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:07:25.830363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:26.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:26.207353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:26.212109 kernel: audit: type=1130 audit(1765890446.207:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:26.221531 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:07:26.290598 kubelet[2788]: E1216 13:07:26.290548 2788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:07:26.294729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:07:26.294921 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:07:26.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 13:07:26.300115 kernel: audit: type=1131 audit(1765890446.295:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:07:26.295416 systemd[1]: kubelet.service: Consumed 225ms CPU time, 108.9M memory peak. Dec 16 13:07:26.620100 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:07:26.627093 kernel: audit: type=1131 audit(1765890446.620:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:26.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:26.636000 audit: BPF prog-id=66 op=UNLOAD Dec 16 13:07:26.638100 kernel: audit: type=1334 audit(1765890446.636:302): prog-id=66 op=UNLOAD Dec 16 13:07:29.560704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:29.560997 systemd[1]: kubelet.service: Consumed 225ms CPU time, 108.9M memory peak. Dec 16 13:07:29.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:29.571516 kernel: audit: type=1130 audit(1765890449.560:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:29.571650 kernel: audit: type=1131 audit(1765890449.560:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:29.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:29.567454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:29.615634 systemd[1]: Reload requested from client PID 2807 ('systemctl') (unit session-7.scope)... Dec 16 13:07:29.615668 systemd[1]: Reloading... Dec 16 13:07:29.793371 zram_generator::config[2854]: No configuration found. Dec 16 13:07:30.118543 systemd[1]: Reloading finished in 502 ms. 
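Editor's note: the kubelet start/stop events are duplicated as kernel audit records with epoch timestamps, so the restart pacing of the failing unit can be confirmed from the log alone: audit(1765890435.616:297) marks the first start and audit(1765890446.207:299) the second, roughly 10.6 s apart, consistent with a restart delay of about 10 seconds (the unit's actual RestartSec= value is not shown in this log). A minimal sketch using just those two timestamps:

from datetime import datetime, timezone

# Hedged sketch: spacing between the two kubelet SERVICE_START audit records above.
first_start  = 1765890435.616   # audit(...:297), journal line at 13:07:15
second_start = 1765890446.207   # audit(...:299), journal line at 13:07:26

print(f"restart spacing: {second_start - first_start:.3f}s")
print(datetime.fromtimestamp(second_start, tz=timezone.utc).isoformat())
# -> restart spacing: 10.591s; the epoch decodes to 2025-12-16 13:07:26 UTC,
#    matching the journal timestamps (assuming the journal is shown in UTC).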
Dec 16 13:07:30.161938 kernel: audit: type=1334 audit(1765890450.156:305): prog-id=70 op=LOAD Dec 16 13:07:30.162086 kernel: audit: type=1334 audit(1765890450.156:306): prog-id=53 op=UNLOAD Dec 16 13:07:30.156000 audit: BPF prog-id=70 op=LOAD Dec 16 13:07:30.156000 audit: BPF prog-id=53 op=UNLOAD Dec 16 13:07:30.159000 audit: BPF prog-id=71 op=LOAD Dec 16 13:07:30.169195 kernel: audit: type=1334 audit(1765890450.159:307): prog-id=71 op=LOAD Dec 16 13:07:30.169341 kernel: audit: type=1334 audit(1765890450.159:308): prog-id=72 op=LOAD Dec 16 13:07:30.159000 audit: BPF prog-id=72 op=LOAD Dec 16 13:07:30.159000 audit: BPF prog-id=54 op=UNLOAD Dec 16 13:07:30.159000 audit: BPF prog-id=55 op=UNLOAD Dec 16 13:07:30.163000 audit: BPF prog-id=73 op=LOAD Dec 16 13:07:30.163000 audit: BPF prog-id=60 op=UNLOAD Dec 16 13:07:30.163000 audit: BPF prog-id=74 op=LOAD Dec 16 13:07:30.163000 audit: BPF prog-id=75 op=LOAD Dec 16 13:07:30.163000 audit: BPF prog-id=61 op=UNLOAD Dec 16 13:07:30.163000 audit: BPF prog-id=62 op=UNLOAD Dec 16 13:07:30.164000 audit: BPF prog-id=76 op=LOAD Dec 16 13:07:30.164000 audit: BPF prog-id=57 op=UNLOAD Dec 16 13:07:30.164000 audit: BPF prog-id=77 op=LOAD Dec 16 13:07:30.164000 audit: BPF prog-id=78 op=LOAD Dec 16 13:07:30.164000 audit: BPF prog-id=58 op=UNLOAD Dec 16 13:07:30.164000 audit: BPF prog-id=59 op=UNLOAD Dec 16 13:07:30.167000 audit: BPF prog-id=79 op=LOAD Dec 16 13:07:30.167000 audit: BPF prog-id=52 op=UNLOAD Dec 16 13:07:30.169000 audit: BPF prog-id=80 op=LOAD Dec 16 13:07:30.169000 audit: BPF prog-id=56 op=UNLOAD Dec 16 13:07:30.172000 audit: BPF prog-id=81 op=LOAD Dec 16 13:07:30.172000 audit: BPF prog-id=49 op=UNLOAD Dec 16 13:07:30.172000 audit: BPF prog-id=82 op=LOAD Dec 16 13:07:30.172000 audit: BPF prog-id=83 op=LOAD Dec 16 13:07:30.172000 audit: BPF prog-id=50 op=UNLOAD Dec 16 13:07:30.172000 audit: BPF prog-id=51 op=UNLOAD Dec 16 13:07:30.173000 audit: BPF prog-id=84 op=LOAD Dec 16 13:07:30.173000 audit: BPF prog-id=85 op=LOAD Dec 16 13:07:30.173000 audit: BPF prog-id=47 op=UNLOAD Dec 16 13:07:30.173000 audit: BPF prog-id=48 op=UNLOAD Dec 16 13:07:30.176000 audit: BPF prog-id=86 op=LOAD Dec 16 13:07:30.176000 audit: BPF prog-id=63 op=UNLOAD Dec 16 13:07:30.176000 audit: BPF prog-id=87 op=LOAD Dec 16 13:07:30.176000 audit: BPF prog-id=88 op=LOAD Dec 16 13:07:30.176000 audit: BPF prog-id=64 op=UNLOAD Dec 16 13:07:30.176000 audit: BPF prog-id=65 op=UNLOAD Dec 16 13:07:30.177000 audit: BPF prog-id=89 op=LOAD Dec 16 13:07:30.177000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:07:30.202604 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:07:30.202732 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:07:30.203220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:30.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:07:30.203318 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.4M memory peak. Dec 16 13:07:30.206168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:30.544683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:30.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 13:07:30.569638 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:07:30.703135 kubelet[2917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:30.703135 kubelet[2917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:07:30.703135 kubelet[2917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:30.710727 kubelet[2917]: I1216 13:07:30.710627 2917 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:07:31.276092 kubelet[2917]: I1216 13:07:31.275171 2917 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:07:31.276092 kubelet[2917]: I1216 13:07:31.275214 2917 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:07:31.276092 kubelet[2917]: I1216 13:07:31.275774 2917 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:07:31.344510 kubelet[2917]: I1216 13:07:31.344456 2917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:07:31.350594 kubelet[2917]: E1216 13:07:31.350433 2917 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:07:31.381814 kubelet[2917]: I1216 13:07:31.381768 2917 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:07:31.390432 kubelet[2917]: I1216 13:07:31.390385 2917 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:07:31.394237 kubelet[2917]: I1216 13:07:31.394139 2917 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:07:31.398622 kubelet[2917]: I1216 13:07:31.394235 2917 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:07:31.400399 kubelet[2917]: I1216 13:07:31.400344 2917 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:07:31.400399 kubelet[2917]: I1216 13:07:31.400396 2917 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:07:31.402749 kubelet[2917]: I1216 13:07:31.401900 2917 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:31.405644 kubelet[2917]: I1216 13:07:31.405556 2917 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:07:31.405803 kubelet[2917]: I1216 13:07:31.405659 2917 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:07:31.405803 kubelet[2917]: I1216 13:07:31.405701 2917 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:07:31.405803 kubelet[2917]: I1216 13:07:31.405722 2917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:07:31.428533 kubelet[2917]: E1216 13:07:31.428432 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:07:31.428533 kubelet[2917]: E1216 13:07:31.428527 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-98&limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" 
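Editor's note: the repeated "dial tcp 172.31.28.98:6443: connect: connection refused" errors from the client-go reflectors mean nothing is listening on the API-server endpoint yet, which is expected on a control-plane node before the kube-apiserver static pod is up. A quick sketch for reproducing that check by hand; the address and port are taken from the log above, and this is a plain TCP probe, not a Kubernetes API call:

import socket

# Hedged sketch: probe the API-server endpoint the kubelet is trying to reach.
# "connection refused" here corresponds to the reflector errors above.
def probe(host: str, port: int, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"
    except OSError as exc:  # timeouts, unreachable networks, etc.
        return f"error: {exc}"

print(probe("172.31.28.98", 6443))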
Dec 16 13:07:31.429069 kubelet[2917]: I1216 13:07:31.429037 2917 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:07:31.429663 kubelet[2917]: I1216 13:07:31.429622 2917 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:07:31.430520 kubelet[2917]: W1216 13:07:31.430477 2917 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:07:31.435734 kubelet[2917]: I1216 13:07:31.435667 2917 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:07:31.436384 kubelet[2917]: I1216 13:07:31.435760 2917 server.go:1289] "Started kubelet" Dec 16 13:07:31.436450 kubelet[2917]: I1216 13:07:31.436407 2917 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:07:31.442295 kubelet[2917]: I1216 13:07:31.442216 2917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:07:31.445489 kubelet[2917]: I1216 13:07:31.445107 2917 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:07:31.453688 kubelet[2917]: I1216 13:07:31.453650 2917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:07:31.457282 kubelet[2917]: E1216 13:07:31.453539 2917 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.98:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-98.1881b4026e145146 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-98,UID:ip-172-31-28-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-98,},FirstTimestamp:2025-12-16 13:07:31.435704646 +0000 UTC m=+0.848025545,LastTimestamp:2025-12-16 13:07:31.435704646 +0000 UTC m=+0.848025545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-98,}" Dec 16 13:07:31.462099 kubelet[2917]: I1216 13:07:31.460828 2917 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:07:31.467623 kubelet[2917]: I1216 13:07:31.467593 2917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:07:31.473435 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 16 13:07:31.473592 kernel: audit: type=1325 audit(1765890451.468:347): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.468000 audit[2932]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.473749 kubelet[2917]: I1216 13:07:31.473683 2917 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:07:31.468000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe722dd4c0 a2=0 a3=0 items=0 ppid=2917 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.480975 kubelet[2917]: E1216 13:07:31.474554 2917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-98\" not found" Dec 16 13:07:31.481146 kernel: audit: type=1300 audit(1765890451.468:347): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe722dd4c0 a2=0 a3=0 items=0 ppid=2917 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.485720 kernel: audit: type=1327 audit(1765890451.468:347): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:07:31.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:07:31.485925 kubelet[2917]: I1216 13:07:31.484720 2917 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:07:31.485925 kubelet[2917]: I1216 13:07:31.484820 2917 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:07:31.476000 audit[2933]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.496134 kernel: audit: type=1325 audit(1765890451.476:348): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.496279 kernel: audit: type=1300 audit(1765890451.476:348): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9f5f59e0 a2=0 a3=0 items=0 ppid=2917 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.476000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9f5f59e0 a2=0 a3=0 items=0 ppid=2917 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.496462 kubelet[2917]: E1216 13:07:31.492599 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:07:31.496462 kubelet[2917]: E1216 13:07:31.492700 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": dial tcp 172.31.28.98:6443: connect: connection refused" interval="200ms" Dec 16 13:07:31.499237 kernel: audit: type=1327 audit(1765890451.476:348): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:07:31.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:07:31.501504 kernel: audit: type=1325 audit(1765890451.476:349): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
13:07:31.476000 audit[2935]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.510822 kernel: audit: type=1300 audit(1765890451.476:349): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd55fae2c0 a2=0 a3=0 items=0 ppid=2917 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.510965 kernel: audit: type=1327 audit(1765890451.476:349): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:07:31.476000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd55fae2c0 a2=0 a3=0 items=0 ppid=2917 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:07:31.511220 kubelet[2917]: I1216 13:07:31.504543 2917 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:07:31.517400 kernel: audit: type=1325 audit(1765890451.485:350): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.485000 audit[2937]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.485000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffea1f0df70 a2=0 a3=0 items=0 ppid=2917 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:07:31.522000 audit[2941]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.522000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff21693dd0 a2=0 a3=0 items=0 ppid=2917 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 13:07:31.526000 audit[2943]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.526000 audit[2943]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7eea0ee0 a2=0 a3=0 items=0 ppid=2917 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.526000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:07:31.527000 audit[2944]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.527000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe64ed55b0 a2=0 a3=0 items=0 ppid=2917 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:07:31.530000 audit[2942]: NETFILTER_CFG table=mangle:49 family=10 entries=2 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:31.531722 kubelet[2917]: I1216 13:07:31.524174 2917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:07:31.530000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdf707f5f0 a2=0 a3=0 items=0 ppid=2917 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:07:31.534400 kubelet[2917]: I1216 13:07:31.533890 2917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:07:31.534400 kubelet[2917]: I1216 13:07:31.533925 2917 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:07:31.534400 kubelet[2917]: I1216 13:07:31.533965 2917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:07:31.534400 kubelet[2917]: I1216 13:07:31.533977 2917 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:07:31.535086 kubelet[2917]: E1216 13:07:31.534047 2917 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:07:31.536000 audit[2946]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:31.536000 audit[2946]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffaff8ccb0 a2=0 a3=0 items=0 ppid=2917 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:07:31.537000 audit[2945]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:31.537000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1fe016c0 a2=0 a3=0 items=0 ppid=2917 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:07:31.539000 audit[2947]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:31.539000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffca7bcc30 a2=0 a3=0 items=0 ppid=2917 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:07:31.539732 kubelet[2917]: E1216 13:07:31.539260 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:07:31.540937 kubelet[2917]: E1216 13:07:31.540907 2917 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:07:31.541385 kubelet[2917]: I1216 13:07:31.541325 2917 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:07:31.541385 kubelet[2917]: I1216 13:07:31.541344 2917 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:07:31.544000 audit[2949]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:31.544000 audit[2949]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe38a22120 a2=0 a3=0 items=0 ppid=2917 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.544000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:07:31.560688 kubelet[2917]: I1216 13:07:31.560656 2917 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:07:31.560688 kubelet[2917]: I1216 13:07:31.560676 2917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:07:31.560908 kubelet[2917]: I1216 13:07:31.560707 2917 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:31.566707 kubelet[2917]: I1216 13:07:31.566648 2917 policy_none.go:49] "None policy: Start" Dec 16 13:07:31.566707 kubelet[2917]: I1216 13:07:31.566694 2917 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:07:31.566707 kubelet[2917]: I1216 13:07:31.566717 2917 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:07:31.575961 kubelet[2917]: E1216 13:07:31.575918 2917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-98\" not found" Dec 16 13:07:31.582395 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:07:31.593329 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:07:31.598932 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:07:31.611576 kubelet[2917]: E1216 13:07:31.611529 2917 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:07:31.612586 kubelet[2917]: I1216 13:07:31.612555 2917 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:07:31.612704 kubelet[2917]: I1216 13:07:31.612582 2917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:07:31.613180 kubelet[2917]: I1216 13:07:31.613040 2917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:07:31.616744 kubelet[2917]: E1216 13:07:31.616711 2917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:07:31.616872 kubelet[2917]: E1216 13:07:31.616771 2917 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-98\" not found" Dec 16 13:07:31.660666 systemd[1]: Created slice kubepods-burstable-pod49c6171303d12176a924fb9651f45750.slice - libcontainer container kubepods-burstable-pod49c6171303d12176a924fb9651f45750.slice. 
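The systemd messages just above show the kubelet's cgroup layout being created under the systemd driver: a top-level kubepods.slice, per-QoS kubepods-burstable.slice and kubepods-besteffort.slice, and then one slice per static pod such as kubepods-burstable-pod49c6171303d12176a924fb9651f45750.slice. The helper below (podSliceName is my own name, purely a log-reading aid and not the kubelet's code) sketches that naming pattern, under the assumption, consistent with the hierarchy above, that guaranteed pods sit directly under kubepods.slice.

package main

import "fmt"

// podSliceName mirrors the systemd unit-name pattern visible above:
// kubepods-<qos>-pod<uid>.slice nested under kubepods-<qos>.slice.
// Reading aid only; not the kubelet's implementation.
func podSliceName(qos, uid string) string {
	if qos == "" { // assumed: guaranteed QoS pods have no intermediate slice
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// UID taken from the kube-apiserver static pod slice above.
	fmt.Println(podSliceName("burstable", "49c6171303d12176a924fb9651f45750"))
	// -> kubepods-burstable-pod49c6171303d12176a924fb9651f45750.slice
}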
Dec 16 13:07:31.671688 kubelet[2917]: E1216 13:07:31.671654 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:31.679789 systemd[1]: Created slice kubepods-burstable-podbcd51d23e6e4d60161206988639c1794.slice - libcontainer container kubepods-burstable-podbcd51d23e6e4d60161206988639c1794.slice. Dec 16 13:07:31.686030 kubelet[2917]: I1216 13:07:31.685982 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:31.686030 kubelet[2917]: I1216 13:07:31.686028 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49c6171303d12176a924fb9651f45750-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-98\" (UID: \"49c6171303d12176a924fb9651f45750\") " pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:31.686272 kubelet[2917]: I1216 13:07:31.686071 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:31.686272 kubelet[2917]: I1216 13:07:31.686095 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:31.686272 kubelet[2917]: I1216 13:07:31.686119 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:31.686272 kubelet[2917]: I1216 13:07:31.686140 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bba725e2030968353d2961278a9d032d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-98\" (UID: \"bba725e2030968353d2961278a9d032d\") " pod="kube-system/kube-scheduler-ip-172-31-28-98" Dec 16 13:07:31.686272 kubelet[2917]: I1216 13:07:31.686168 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49c6171303d12176a924fb9651f45750-ca-certs\") pod \"kube-apiserver-ip-172-31-28-98\" (UID: \"49c6171303d12176a924fb9651f45750\") " pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:31.686519 kubelet[2917]: I1216 13:07:31.686188 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/49c6171303d12176a924fb9651f45750-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-98\" (UID: \"49c6171303d12176a924fb9651f45750\") " pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:31.686519 kubelet[2917]: I1216 13:07:31.686208 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:31.689605 kubelet[2917]: E1216 13:07:31.689575 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:31.693847 systemd[1]: Created slice kubepods-burstable-podbba725e2030968353d2961278a9d032d.slice - libcontainer container kubepods-burstable-podbba725e2030968353d2961278a9d032d.slice. Dec 16 13:07:31.694397 kubelet[2917]: E1216 13:07:31.694220 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": dial tcp 172.31.28.98:6443: connect: connection refused" interval="400ms" Dec 16 13:07:31.697445 kubelet[2917]: E1216 13:07:31.697411 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:31.716400 kubelet[2917]: I1216 13:07:31.716363 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-98" Dec 16 13:07:31.716988 kubelet[2917]: E1216 13:07:31.716938 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.98:6443/api/v1/nodes\": dial tcp 172.31.28.98:6443: connect: connection refused" node="ip-172-31-28-98" Dec 16 13:07:31.919402 kubelet[2917]: I1216 13:07:31.919373 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-98" Dec 16 13:07:31.919801 kubelet[2917]: E1216 13:07:31.919764 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.98:6443/api/v1/nodes\": dial tcp 172.31.28.98:6443: connect: connection refused" node="ip-172-31-28-98" Dec 16 13:07:31.973476 containerd[1969]: time="2025-12-16T13:07:31.973419186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-98,Uid:49c6171303d12176a924fb9651f45750,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:31.992406 containerd[1969]: time="2025-12-16T13:07:31.991748531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-98,Uid:bcd51d23e6e4d60161206988639c1794,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:31.999756 containerd[1969]: time="2025-12-16T13:07:31.999700713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-98,Uid:bba725e2030968353d2961278a9d032d,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:32.105510 kubelet[2917]: E1216 13:07:32.105443 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": dial tcp 172.31.28.98:6443: connect: connection refused" interval="800ms" Dec 16 13:07:32.215264 containerd[1969]: 
time="2025-12-16T13:07:32.214296811Z" level=info msg="connecting to shim 17942d88f74f10496e0b20f55738589cff68ee34cd764cf5e69002e439422333" address="unix:///run/containerd/s/2d0cbd9ac796a86ff1d15f49230387514cf3c0925d39ec226c2baac7390d0b62" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:32.217352 containerd[1969]: time="2025-12-16T13:07:32.217307749Z" level=info msg="connecting to shim c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699" address="unix:///run/containerd/s/1a12de6008f564d8f5007f887466ff78bc4ef8abe82954ed7d95e775aca38b05" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:32.218444 containerd[1969]: time="2025-12-16T13:07:32.218412606Z" level=info msg="connecting to shim f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81" address="unix:///run/containerd/s/f43ced74c4374acacc8fc97eca0c299a7390c4b6591e026df475f41f1a44ce5c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:32.246712 kubelet[2917]: E1216 13:07:32.246645 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-98&limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:07:32.325112 kubelet[2917]: I1216 13:07:32.324624 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-98" Dec 16 13:07:32.325112 kubelet[2917]: E1216 13:07:32.325054 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.98:6443/api/v1/nodes\": dial tcp 172.31.28.98:6443: connect: connection refused" node="ip-172-31-28-98" Dec 16 13:07:32.351513 systemd[1]: Started cri-containerd-17942d88f74f10496e0b20f55738589cff68ee34cd764cf5e69002e439422333.scope - libcontainer container 17942d88f74f10496e0b20f55738589cff68ee34cd764cf5e69002e439422333. Dec 16 13:07:32.355836 kubelet[2917]: E1216 13:07:32.354624 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:07:32.354977 systemd[1]: Started cri-containerd-c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699.scope - libcontainer container c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699. Dec 16 13:07:32.357425 systemd[1]: Started cri-containerd-f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81.scope - libcontainer container f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81. 
Dec 16 13:07:32.402000 audit: BPF prog-id=90 op=LOAD Dec 16 13:07:32.404000 audit: BPF prog-id=91 op=LOAD Dec 16 13:07:32.405000 audit: BPF prog-id=92 op=LOAD Dec 16 13:07:32.405000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.406000 audit: BPF prog-id=92 op=UNLOAD Dec 16 13:07:32.406000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.406000 audit: BPF prog-id=93 op=LOAD Dec 16 13:07:32.406000 audit[3013]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.406000 audit: BPF prog-id=93 op=UNLOAD Dec 16 13:07:32.406000 audit: BPF prog-id=94 op=LOAD Dec 16 13:07:32.406000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.407000 audit: BPF prog-id=95 op=LOAD Dec 16 13:07:32.407000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.407000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.407000 audit: BPF prog-id=95 op=UNLOAD Dec 16 13:07:32.407000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.407000 audit: BPF prog-id=94 op=UNLOAD Dec 16 13:07:32.407000 audit[3011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.407000 audit: BPF prog-id=96 op=LOAD Dec 16 13:07:32.407000 audit[3011]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2974 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137393432643838663734663130343936653062323066353537333835 Dec 16 13:07:32.406000 audit[3013]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.408000 audit: BPF prog-id=97 op=LOAD Dec 16 13:07:32.408000 audit[3013]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.408000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.408000 audit: BPF prog-id=98 op=LOAD Dec 16 13:07:32.408000 audit: BPF prog-id=99 op=LOAD Dec 16 13:07:32.408000 audit[3013]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.409000 audit: BPF prog-id=99 op=UNLOAD Dec 16 13:07:32.409000 audit[3013]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.409000 audit: BPF prog-id=97 op=UNLOAD Dec 16 13:07:32.409000 audit[3013]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.409000 audit: BPF prog-id=100 op=LOAD Dec 16 13:07:32.409000 audit[3013]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2978 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637656535316165616665383635653264356332663537313838373337 Dec 16 13:07:32.412000 audit: BPF prog-id=101 op=LOAD Dec 16 13:07:32.412000 audit[3010]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.412000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.416000 audit: BPF prog-id=101 op=UNLOAD Dec 16 13:07:32.416000 audit[3010]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.416000 audit: BPF prog-id=102 op=LOAD Dec 16 13:07:32.416000 audit[3010]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.416000 audit: BPF prog-id=103 op=LOAD Dec 16 13:07:32.416000 audit[3010]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.418000 audit: BPF prog-id=103 op=UNLOAD Dec 16 13:07:32.418000 audit[3010]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.418000 audit: BPF prog-id=102 op=UNLOAD Dec 16 13:07:32.418000 audit[3010]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.420000 audit: BPF prog-id=104 op=LOAD Dec 16 13:07:32.420000 audit[3010]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2979 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331613965333864653261393738386131316637643261663336353336 Dec 16 13:07:32.440465 kubelet[2917]: E1216 13:07:32.440417 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:07:32.505935 containerd[1969]: time="2025-12-16T13:07:32.505572819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-98,Uid:49c6171303d12176a924fb9651f45750,Namespace:kube-system,Attempt:0,} returns sandbox id \"17942d88f74f10496e0b20f55738589cff68ee34cd764cf5e69002e439422333\"" Dec 16 13:07:32.509460 containerd[1969]: time="2025-12-16T13:07:32.509367349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-98,Uid:bba725e2030968353d2961278a9d032d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81\"" Dec 16 13:07:32.525986 containerd[1969]: time="2025-12-16T13:07:32.525941610Z" level=info msg="CreateContainer within sandbox \"17942d88f74f10496e0b20f55738589cff68ee34cd764cf5e69002e439422333\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:07:32.528150 containerd[1969]: time="2025-12-16T13:07:32.527860191Z" level=info msg="CreateContainer within sandbox \"f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:07:32.542523 containerd[1969]: time="2025-12-16T13:07:32.542345703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-98,Uid:bcd51d23e6e4d60161206988639c1794,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699\"" Dec 16 13:07:32.547490 containerd[1969]: time="2025-12-16T13:07:32.547443306Z" level=info msg="Container a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:32.554538 containerd[1969]: time="2025-12-16T13:07:32.554486736Z" level=info msg="CreateContainer within sandbox \"c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:07:32.567526 containerd[1969]: time="2025-12-16T13:07:32.567481013Z" level=info msg="CreateContainer within sandbox \"f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1\"" Dec 16 13:07:32.568788 containerd[1969]: time="2025-12-16T13:07:32.568688314Z" level=info msg="StartContainer for \"a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1\"" Dec 16 13:07:32.570384 containerd[1969]: time="2025-12-16T13:07:32.570328299Z" level=info msg="connecting to shim a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1" address="unix:///run/containerd/s/f43ced74c4374acacc8fc97eca0c299a7390c4b6591e026df475f41f1a44ce5c" protocol=ttrpc version=3 Dec 16 13:07:32.570467 containerd[1969]: time="2025-12-16T13:07:32.570417532Z" level=info msg="Container a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:32.592394 containerd[1969]: time="2025-12-16T13:07:32.592282718Z" level=info msg="CreateContainer within sandbox \"17942d88f74f10496e0b20f55738589cff68ee34cd764cf5e69002e439422333\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea\"" Dec 16 13:07:32.594592 containerd[1969]: time="2025-12-16T13:07:32.593336695Z" level=info msg="StartContainer for \"a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea\"" Dec 16 13:07:32.596027 containerd[1969]: time="2025-12-16T13:07:32.595976346Z" level=info msg="Container 7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:32.601720 containerd[1969]: time="2025-12-16T13:07:32.601655023Z" level=info msg="connecting to shim a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea" address="unix:///run/containerd/s/2d0cbd9ac796a86ff1d15f49230387514cf3c0925d39ec226c2baac7390d0b62" protocol=ttrpc version=3 Dec 16 13:07:32.616849 systemd[1]: Started cri-containerd-a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1.scope - libcontainer container a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1. 
Dec 16 13:07:32.627420 containerd[1969]: time="2025-12-16T13:07:32.627213242Z" level=info msg="CreateContainer within sandbox \"c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84\"" Dec 16 13:07:32.629378 containerd[1969]: time="2025-12-16T13:07:32.629330064Z" level=info msg="StartContainer for \"7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84\"" Dec 16 13:07:32.635281 containerd[1969]: time="2025-12-16T13:07:32.635229953Z" level=info msg="connecting to shim 7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84" address="unix:///run/containerd/s/1a12de6008f564d8f5007f887466ff78bc4ef8abe82954ed7d95e775aca38b05" protocol=ttrpc version=3 Dec 16 13:07:32.649000 audit: BPF prog-id=105 op=LOAD Dec 16 13:07:32.652000 audit: BPF prog-id=106 op=LOAD Dec 16 13:07:32.652000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.653000 audit: BPF prog-id=106 op=UNLOAD Dec 16 13:07:32.653000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.655000 audit: BPF prog-id=107 op=LOAD Dec 16 13:07:32.655000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.656000 audit: BPF prog-id=108 op=LOAD Dec 16 13:07:32.656000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.656000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.659000 audit: BPF prog-id=108 op=UNLOAD Dec 16 13:07:32.659000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.659000 audit: BPF prog-id=107 op=UNLOAD Dec 16 13:07:32.659000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.659000 audit: BPF prog-id=109 op=LOAD Dec 16 13:07:32.659000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139333336393330653839363465356663336631353037393037646637 Dec 16 13:07:32.663394 systemd[1]: Started cri-containerd-a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea.scope - libcontainer container a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea. Dec 16 13:07:32.677197 systemd[1]: Started cri-containerd-7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84.scope - libcontainer container 7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84. 
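Taken together, the containerd messages in this window trace the CRI sequence the kubelet drives for each static pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox and returns a container ID, and StartContainer launches it (the "returns successfully" confirmations follow shortly). The sketch below only mirrors that call order; runtimeService, startStaticPod, and fakeRuntime are simplified stand-ins of mine, not the real k8s.io/cri-api definitions.

package main

import "fmt"

// runtimeService is a deliberately simplified stand-in for the CRI
// RuntimeService, shaped only to show the call order visible in the log.
type runtimeService interface {
	RunPodSandbox(podConfig string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerConfig string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startStaticPod walks the three steps in the order they appear above:
// sandbox first, then the container inside it, then start.
func startStaticPod(rs runtimeService, podConfig, containerConfig string) (string, error) {
	sandboxID, err := rs.RunPodSandbox(podConfig)
	if err != nil {
		return "", fmt.Errorf("RunPodSandbox: %w", err)
	}
	containerID, err := rs.CreateContainer(sandboxID, containerConfig)
	if err != nil {
		return "", fmt.Errorf("CreateContainer: %w", err)
	}
	return containerID, rs.StartContainer(containerID)
}

// fakeRuntime lets the sketch run without a real CRI endpoint.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(string) (string, error)           { return "sandbox-1", nil }
func (fakeRuntime) CreateContainer(string, string) (string, error) { return "container-1", nil }
func (fakeRuntime) StartContainer(string) error                    { return nil }

func main() {
	id, err := startStaticPod(fakeRuntime{}, "pod-config", "container-config")
	fmt.Println(id, err)
}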
Dec 16 13:07:32.702000 audit: BPF prog-id=110 op=LOAD Dec 16 13:07:32.704000 audit: BPF prog-id=111 op=LOAD Dec 16 13:07:32.704000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.705000 audit: BPF prog-id=111 op=UNLOAD Dec 16 13:07:32.705000 audit[3102]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.706000 audit: BPF prog-id=112 op=LOAD Dec 16 13:07:32.706000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.708000 audit: BPF prog-id=113 op=LOAD Dec 16 13:07:32.706000 audit: BPF prog-id=114 op=LOAD Dec 16 13:07:32.706000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.709000 audit: BPF prog-id=114 op=UNLOAD Dec 16 13:07:32.709000 audit[3102]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.709000 audit: BPF prog-id=112 
op=UNLOAD Dec 16 13:07:32.709000 audit[3102]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.710000 audit: BPF prog-id=115 op=LOAD Dec 16 13:07:32.710000 audit: BPF prog-id=116 op=LOAD Dec 16 13:07:32.710000 audit[3118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2979 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.710000 audit: BPF prog-id=115 op=UNLOAD Dec 16 13:07:32.710000 audit[3118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.711000 audit: BPF prog-id=117 op=LOAD Dec 16 13:07:32.710000 audit[3102]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2974 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.711000 audit[3118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2979 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136616338633431643739633162346364383130643834623939636333 Dec 16 13:07:32.711000 audit: BPF prog-id=118 op=LOAD Dec 16 13:07:32.711000 audit[3118]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2979 pid=3118 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.711000 audit: BPF prog-id=118 op=UNLOAD Dec 16 13:07:32.711000 audit[3118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.712000 audit: BPF prog-id=117 op=UNLOAD Dec 16 13:07:32.712000 audit[3118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.712000 audit: BPF prog-id=119 op=LOAD Dec 16 13:07:32.712000 audit[3118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2979 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656337363165323533633761396136353433633439613039333765 Dec 16 13:07:32.752724 kubelet[2917]: E1216 13:07:32.752509 2917 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:07:32.784002 containerd[1969]: time="2025-12-16T13:07:32.783659839Z" level=info msg="StartContainer for \"a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1\" returns successfully" Dec 16 13:07:32.806839 containerd[1969]: time="2025-12-16T13:07:32.806760575Z" level=info msg="StartContainer for \"a6ac8c41d79c1b4cd810d84b99cc37c088cb9a3fd7cd9d5af8edcd98da16a3ea\" returns successfully" Dec 16 13:07:32.812680 containerd[1969]: time="2025-12-16T13:07:32.812542964Z" level=info msg="StartContainer for \"7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84\" returns successfully" Dec 16 13:07:32.907043 
kubelet[2917]: E1216 13:07:32.906818 2917 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": dial tcp 172.31.28.98:6443: connect: connection refused" interval="1.6s" Dec 16 13:07:33.131478 kubelet[2917]: I1216 13:07:33.131408 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-98" Dec 16 13:07:33.132345 kubelet[2917]: E1216 13:07:33.132303 2917 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.98:6443/api/v1/nodes\": dial tcp 172.31.28.98:6443: connect: connection refused" node="ip-172-31-28-98" Dec 16 13:07:33.570021 kubelet[2917]: E1216 13:07:33.569843 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:33.577474 kubelet[2917]: E1216 13:07:33.576440 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:33.580116 kubelet[2917]: E1216 13:07:33.580089 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:34.583311 kubelet[2917]: E1216 13:07:34.582676 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:34.586370 kubelet[2917]: E1216 13:07:34.585524 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:34.586370 kubelet[2917]: E1216 13:07:34.585732 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:34.735149 kubelet[2917]: I1216 13:07:34.734845 2917 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-98" Dec 16 13:07:35.589625 kubelet[2917]: E1216 13:07:35.589591 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:35.591550 kubelet[2917]: E1216 13:07:35.591466 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:35.593316 kubelet[2917]: E1216 13:07:35.593224 2917 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:36.535465 kubelet[2917]: E1216 13:07:36.535423 2917 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-98\" not found" node="ip-172-31-28-98" Dec 16 13:07:36.588597 kubelet[2917]: I1216 13:07:36.588136 2917 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-98" Dec 16 13:07:36.588597 kubelet[2917]: E1216 13:07:36.588183 2917 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-98\": node \"ip-172-31-28-98\" not found" Dec 16 13:07:36.688049 kubelet[2917]: I1216 13:07:36.685436 2917 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-98" Dec 16 13:07:36.701754 kubelet[2917]: E1216 13:07:36.701708 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-98" Dec 16 13:07:36.701754 kubelet[2917]: I1216 13:07:36.701756 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:36.705095 kubelet[2917]: E1216 13:07:36.705042 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:36.705095 kubelet[2917]: I1216 13:07:36.705095 2917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:36.710227 kubelet[2917]: E1216 13:07:36.710180 2917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:37.429416 kubelet[2917]: I1216 13:07:37.429155 2917 apiserver.go:52] "Watching apiserver" Dec 16 13:07:37.485234 kubelet[2917]: I1216 13:07:37.485159 2917 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:07:39.187697 systemd[1]: Reload requested from client PID 3199 ('systemctl') (unit session-7.scope)... Dec 16 13:07:39.187721 systemd[1]: Reloading... Dec 16 13:07:39.435177 zram_generator::config[3255]: No configuration found. Dec 16 13:07:39.904981 systemd[1]: Reloading finished in 716 ms. Dec 16 13:07:39.947314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:07:39.970479 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 16 13:07:39.970632 kernel: audit: type=1131 audit(1765890459.966:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:39.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:39.966556 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:07:39.966836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:07:39.966915 systemd[1]: kubelet.service: Consumed 1.268s CPU time, 128.8M memory peak. Dec 16 13:07:39.974181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:07:39.975000 audit: BPF prog-id=120 op=LOAD Dec 16 13:07:39.978149 kernel: audit: type=1334 audit(1765890459.975:408): prog-id=120 op=LOAD Dec 16 13:07:39.975000 audit: BPF prog-id=121 op=LOAD Dec 16 13:07:39.984667 kernel: audit: type=1334 audit(1765890459.975:409): prog-id=121 op=LOAD Dec 16 13:07:39.984800 kernel: audit: type=1334 audit(1765890459.975:410): prog-id=84 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:07:39.989114 kernel: audit: type=1334 audit(1765890459.975:411): prog-id=85 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=122 op=LOAD Dec 16 13:07:39.992116 kernel: audit: type=1334 audit(1765890459.975:412): prog-id=122 op=LOAD Dec 16 13:07:39.992213 kernel: audit: type=1334 audit(1765890459.975:413): prog-id=81 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=81 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=123 op=LOAD Dec 16 13:07:39.975000 audit: BPF prog-id=124 op=LOAD Dec 16 13:07:39.996874 kernel: audit: type=1334 audit(1765890459.975:414): prog-id=123 op=LOAD Dec 16 13:07:39.996968 kernel: audit: type=1334 audit(1765890459.975:415): prog-id=124 op=LOAD Dec 16 13:07:39.997005 kernel: audit: type=1334 audit(1765890459.975:416): prog-id=82 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:07:39.975000 audit: BPF prog-id=83 op=UNLOAD Dec 16 13:07:39.978000 audit: BPF prog-id=125 op=LOAD Dec 16 13:07:39.978000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:07:39.981000 audit: BPF prog-id=126 op=LOAD Dec 16 13:07:39.981000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:07:39.983000 audit: BPF prog-id=127 op=LOAD Dec 16 13:07:39.983000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:07:39.983000 audit: BPF prog-id=128 op=LOAD Dec 16 13:07:39.983000 audit: BPF prog-id=129 op=LOAD Dec 16 13:07:39.983000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:07:39.983000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:07:39.988000 audit: BPF prog-id=130 op=LOAD Dec 16 13:07:39.988000 audit: BPF prog-id=86 op=UNLOAD Dec 16 13:07:39.988000 audit: BPF prog-id=131 op=LOAD Dec 16 13:07:39.988000 audit: BPF prog-id=132 op=LOAD Dec 16 13:07:39.988000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:07:39.988000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:07:39.990000 audit: BPF prog-id=133 op=LOAD Dec 16 13:07:39.990000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:07:39.991000 audit: BPF prog-id=134 op=LOAD Dec 16 13:07:39.991000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:07:39.991000 audit: BPF prog-id=135 op=LOAD Dec 16 13:07:39.991000 audit: BPF prog-id=136 op=LOAD Dec 16 13:07:39.991000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:07:39.991000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:07:39.994000 audit: BPF prog-id=137 op=LOAD Dec 16 13:07:39.994000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:07:39.995000 audit: BPF prog-id=138 op=LOAD Dec 16 13:07:39.995000 audit: BPF prog-id=139 op=LOAD Dec 16 13:07:39.995000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:07:39.995000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:07:40.337972 update_engine[1942]: I20251216 13:07:40.337141 1942 update_attempter.cc:509] Updating boot flags... Dec 16 13:07:40.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:40.351709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:07:40.371818 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:07:40.510459 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:40.510459 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:07:40.510459 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:07:40.510459 kubelet[3309]: I1216 13:07:40.509971 3309 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:07:40.526507 kubelet[3309]: I1216 13:07:40.525208 3309 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:07:40.526507 kubelet[3309]: I1216 13:07:40.525244 3309 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:07:40.526507 kubelet[3309]: I1216 13:07:40.525577 3309 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:07:40.530108 kubelet[3309]: I1216 13:07:40.529128 3309 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:07:40.544088 kubelet[3309]: I1216 13:07:40.543313 3309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:07:40.562083 kubelet[3309]: I1216 13:07:40.562036 3309 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:07:40.566523 kubelet[3309]: I1216 13:07:40.566480 3309 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:07:40.566822 kubelet[3309]: I1216 13:07:40.566769 3309 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:07:40.567036 kubelet[3309]: I1216 13:07:40.566817 3309 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:07:40.567237 kubelet[3309]: I1216 13:07:40.567039 3309 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:07:40.567237 kubelet[3309]: I1216 13:07:40.567054 3309 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:07:40.569352 kubelet[3309]: I1216 13:07:40.569212 3309 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:40.574811 kubelet[3309]: I1216 13:07:40.574605 3309 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:07:40.574811 kubelet[3309]: I1216 13:07:40.574664 3309 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:07:40.575014 kubelet[3309]: I1216 13:07:40.574862 3309 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:07:40.575014 kubelet[3309]: I1216 13:07:40.574902 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:07:40.612788 kubelet[3309]: I1216 13:07:40.612432 3309 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:07:40.616459 kubelet[3309]: I1216 13:07:40.615840 3309 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:07:40.631556 kubelet[3309]: I1216 13:07:40.631289 3309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:07:40.632276 kubelet[3309]: I1216 13:07:40.632034 3309 server.go:1289] "Started kubelet" Dec 16 13:07:40.635437 kubelet[3309]: I1216 13:07:40.634748 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 
13:07:40.638843 kubelet[3309]: I1216 13:07:40.638651 3309 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:07:40.641590 kubelet[3309]: I1216 13:07:40.640700 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:07:40.644348 kubelet[3309]: I1216 13:07:40.644292 3309 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:07:40.650085 kubelet[3309]: I1216 13:07:40.649960 3309 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:07:40.662627 kubelet[3309]: I1216 13:07:40.662584 3309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:07:40.667195 kubelet[3309]: I1216 13:07:40.667157 3309 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:07:40.667388 kubelet[3309]: I1216 13:07:40.667373 3309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:07:40.670107 kubelet[3309]: I1216 13:07:40.667566 3309 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:07:40.674532 kubelet[3309]: I1216 13:07:40.674499 3309 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:07:40.676494 kubelet[3309]: I1216 13:07:40.676453 3309 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:07:40.678307 kubelet[3309]: E1216 13:07:40.678279 3309 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:07:40.695089 kubelet[3309]: I1216 13:07:40.693820 3309 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:07:40.747890 kubelet[3309]: I1216 13:07:40.747821 3309 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:07:40.753237 kubelet[3309]: I1216 13:07:40.753198 3309 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:07:40.753237 kubelet[3309]: I1216 13:07:40.753238 3309 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:07:40.753466 kubelet[3309]: I1216 13:07:40.753405 3309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:07:40.753466 kubelet[3309]: I1216 13:07:40.753417 3309 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:07:40.753539 kubelet[3309]: E1216 13:07:40.753486 3309 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:07:40.856327 kubelet[3309]: E1216 13:07:40.856120 3309 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:07:41.056583 kubelet[3309]: E1216 13:07:41.056202 3309 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:07:41.075332 kubelet[3309]: I1216 13:07:41.075304 3309 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:07:41.075580 kubelet[3309]: I1216 13:07:41.075561 3309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:07:41.075787 kubelet[3309]: I1216 13:07:41.075772 3309 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:07:41.076480 kubelet[3309]: I1216 13:07:41.076450 3309 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:07:41.077476 kubelet[3309]: I1216 13:07:41.077100 3309 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:07:41.077476 kubelet[3309]: I1216 13:07:41.077172 3309 policy_none.go:49] "None policy: Start" Dec 16 13:07:41.077476 kubelet[3309]: I1216 13:07:41.077190 3309 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:07:41.077476 kubelet[3309]: I1216 13:07:41.077208 3309 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:07:41.077476 kubelet[3309]: I1216 13:07:41.077422 3309 state_mem.go:75] "Updated machine memory state" Dec 16 13:07:41.099487 kubelet[3309]: E1216 13:07:41.099460 3309 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:07:41.101954 kubelet[3309]: I1216 13:07:41.101664 3309 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:07:41.101954 kubelet[3309]: I1216 13:07:41.101691 3309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:07:41.108515 kubelet[3309]: I1216 13:07:41.106500 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:07:41.121120 kubelet[3309]: E1216 13:07:41.120143 3309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:07:41.259641 kubelet[3309]: I1216 13:07:41.254544 3309 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-98" Dec 16 13:07:41.287737 kubelet[3309]: I1216 13:07:41.287696 3309 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-98" Dec 16 13:07:41.293786 kubelet[3309]: I1216 13:07:41.293586 3309 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-98" Dec 16 13:07:41.474861 kubelet[3309]: I1216 13:07:41.474613 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:41.478334 kubelet[3309]: I1216 13:07:41.475761 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:41.481329 kubelet[3309]: I1216 13:07:41.476186 3309 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-98" Dec 16 13:07:41.487432 kubelet[3309]: I1216 13:07:41.486176 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bba725e2030968353d2961278a9d032d-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-98\" (UID: \"bba725e2030968353d2961278a9d032d\") " pod="kube-system/kube-scheduler-ip-172-31-28-98" Dec 16 13:07:41.487432 kubelet[3309]: I1216 13:07:41.486277 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49c6171303d12176a924fb9651f45750-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-98\" (UID: \"49c6171303d12176a924fb9651f45750\") " pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:41.487432 kubelet[3309]: I1216 13:07:41.486360 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49c6171303d12176a924fb9651f45750-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-98\" (UID: \"49c6171303d12176a924fb9651f45750\") " pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:41.487432 kubelet[3309]: I1216 13:07:41.486933 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:41.487432 kubelet[3309]: I1216 13:07:41.487375 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:41.487818 kubelet[3309]: I1216 13:07:41.487443 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:41.487818 kubelet[3309]: I1216 13:07:41.487503 
3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49c6171303d12176a924fb9651f45750-ca-certs\") pod \"kube-apiserver-ip-172-31-28-98\" (UID: \"49c6171303d12176a924fb9651f45750\") " pod="kube-system/kube-apiserver-ip-172-31-28-98" Dec 16 13:07:41.487818 kubelet[3309]: I1216 13:07:41.487528 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:41.487818 kubelet[3309]: I1216 13:07:41.487552 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bcd51d23e6e4d60161206988639c1794-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-98\" (UID: \"bcd51d23e6e4d60161206988639c1794\") " pod="kube-system/kube-controller-manager-ip-172-31-28-98" Dec 16 13:07:41.585315 kubelet[3309]: I1216 13:07:41.584191 3309 apiserver.go:52] "Watching apiserver" Dec 16 13:07:41.668015 kubelet[3309]: I1216 13:07:41.667935 3309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:07:41.870874 kubelet[3309]: I1216 13:07:41.870625 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-98" podStartSLOduration=0.870600686 podStartE2EDuration="870.600686ms" podCreationTimestamp="2025-12-16 13:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:41.854119099 +0000 UTC m=+1.461235153" watchObservedRunningTime="2025-12-16 13:07:41.870600686 +0000 UTC m=+1.477716739" Dec 16 13:07:41.887496 kubelet[3309]: I1216 13:07:41.886170 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-98" podStartSLOduration=0.886143584 podStartE2EDuration="886.143584ms" podCreationTimestamp="2025-12-16 13:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:41.872018819 +0000 UTC m=+1.479134902" watchObservedRunningTime="2025-12-16 13:07:41.886143584 +0000 UTC m=+1.493259641" Dec 16 13:07:41.908589 kubelet[3309]: I1216 13:07:41.907806 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-98" podStartSLOduration=0.907782866 podStartE2EDuration="907.782866ms" podCreationTimestamp="2025-12-16 13:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:41.887394574 +0000 UTC m=+1.494510629" watchObservedRunningTime="2025-12-16 13:07:41.907782866 +0000 UTC m=+1.514898922" Dec 16 13:07:43.916665 kubelet[3309]: I1216 13:07:43.916631 3309 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:07:43.922576 containerd[1969]: time="2025-12-16T13:07:43.922500886Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 13:07:43.923455 kubelet[3309]: I1216 13:07:43.922956 3309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:07:44.887946 systemd[1]: Created slice kubepods-besteffort-pod9606f677_8759_4c58_a456_b208070c8f64.slice - libcontainer container kubepods-besteffort-pod9606f677_8759_4c58_a456_b208070c8f64.slice. Dec 16 13:07:44.918634 kubelet[3309]: I1216 13:07:44.918584 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9606f677-8759-4c58-a456-b208070c8f64-lib-modules\") pod \"kube-proxy-4dxsc\" (UID: \"9606f677-8759-4c58-a456-b208070c8f64\") " pod="kube-system/kube-proxy-4dxsc" Dec 16 13:07:44.918634 kubelet[3309]: I1216 13:07:44.918774 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9606f677-8759-4c58-a456-b208070c8f64-kube-proxy\") pod \"kube-proxy-4dxsc\" (UID: \"9606f677-8759-4c58-a456-b208070c8f64\") " pod="kube-system/kube-proxy-4dxsc" Dec 16 13:07:44.918634 kubelet[3309]: I1216 13:07:44.918804 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9606f677-8759-4c58-a456-b208070c8f64-xtables-lock\") pod \"kube-proxy-4dxsc\" (UID: \"9606f677-8759-4c58-a456-b208070c8f64\") " pod="kube-system/kube-proxy-4dxsc" Dec 16 13:07:44.918634 kubelet[3309]: I1216 13:07:44.918824 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfx9\" (UniqueName: \"kubernetes.io/projected/9606f677-8759-4c58-a456-b208070c8f64-kube-api-access-jwfx9\") pod \"kube-proxy-4dxsc\" (UID: \"9606f677-8759-4c58-a456-b208070c8f64\") " pod="kube-system/kube-proxy-4dxsc" Dec 16 13:07:45.016766 systemd[1]: Created slice kubepods-besteffort-pod7a4792f6_b125_4976_aefd_49f96ccab0c9.slice - libcontainer container kubepods-besteffort-pod7a4792f6_b125_4976_aefd_49f96ccab0c9.slice. 
Dec 16 13:07:45.120316 kubelet[3309]: I1216 13:07:45.120254 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a4792f6-b125-4976-aefd-49f96ccab0c9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mtmv7\" (UID: \"7a4792f6-b125-4976-aefd-49f96ccab0c9\") " pod="tigera-operator/tigera-operator-7dcd859c48-mtmv7" Dec 16 13:07:45.120569 kubelet[3309]: I1216 13:07:45.120334 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7tzh\" (UniqueName: \"kubernetes.io/projected/7a4792f6-b125-4976-aefd-49f96ccab0c9-kube-api-access-z7tzh\") pod \"tigera-operator-7dcd859c48-mtmv7\" (UID: \"7a4792f6-b125-4976-aefd-49f96ccab0c9\") " pod="tigera-operator/tigera-operator-7dcd859c48-mtmv7" Dec 16 13:07:45.197980 containerd[1969]: time="2025-12-16T13:07:45.197824229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dxsc,Uid:9606f677-8759-4c58-a456-b208070c8f64,Namespace:kube-system,Attempt:0,}" Dec 16 13:07:45.261082 containerd[1969]: time="2025-12-16T13:07:45.259802985Z" level=info msg="connecting to shim 93f818ae387cef82948ff2fcec1a993e016f97ee2fb2d1f587e04f19f3d84da0" address="unix:///run/containerd/s/98841fd8c440a738d64bed110e4a697357dbcad97db37d169aaa3573e7f06a82" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:45.328660 systemd[1]: Started cri-containerd-93f818ae387cef82948ff2fcec1a993e016f97ee2fb2d1f587e04f19f3d84da0.scope - libcontainer container 93f818ae387cef82948ff2fcec1a993e016f97ee2fb2d1f587e04f19f3d84da0. Dec 16 13:07:45.362248 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 13:07:45.362375 kernel: audit: type=1334 audit(1765890465.359:449): prog-id=140 op=LOAD Dec 16 13:07:45.359000 audit: BPF prog-id=140 op=LOAD Dec 16 13:07:45.364000 audit: BPF prog-id=141 op=LOAD Dec 16 13:07:45.364000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.368532 kernel: audit: type=1334 audit(1765890465.364:450): prog-id=141 op=LOAD Dec 16 13:07:45.368633 kernel: audit: type=1300 audit(1765890465.364:450): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.376352 kernel: audit: type=1327 audit(1765890465.364:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.364000 audit: BPF prog-id=141 op=UNLOAD Dec 16 13:07:45.381716 kernel: audit: type=1334 audit(1765890465.364:451): prog-id=141 op=UNLOAD Dec 16 13:07:45.381848 kernel: audit: type=1300 audit(1765890465.364:451): 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.364000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.387175 kernel: audit: type=1327 audit(1765890465.364:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.395568 containerd[1969]: time="2025-12-16T13:07:45.395523517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mtmv7,Uid:7a4792f6-b125-4976-aefd-49f96ccab0c9,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:07:45.365000 audit: BPF prog-id=142 op=LOAD Dec 16 13:07:45.365000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.401502 kernel: audit: type=1334 audit(1765890465.365:452): prog-id=142 op=LOAD Dec 16 13:07:45.401592 kernel: audit: type=1300 audit(1765890465.365:452): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.415099 kernel: audit: type=1327 audit(1765890465.365:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.366000 audit: BPF prog-id=143 op=LOAD Dec 16 13:07:45.366000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.366000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.366000 audit: BPF prog-id=143 op=UNLOAD Dec 16 13:07:45.366000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.366000 audit: BPF prog-id=142 op=UNLOAD Dec 16 13:07:45.366000 audit[3561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.366000 audit: BPF prog-id=144 op=LOAD Dec 16 13:07:45.366000 audit[3561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3550 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663831386165333837636566383239343866663266636563316139 Dec 16 13:07:45.426827 containerd[1969]: time="2025-12-16T13:07:45.426782189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dxsc,Uid:9606f677-8759-4c58-a456-b208070c8f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f818ae387cef82948ff2fcec1a993e016f97ee2fb2d1f587e04f19f3d84da0\"" Dec 16 13:07:45.444192 containerd[1969]: time="2025-12-16T13:07:45.443821356Z" level=info msg="CreateContainer within sandbox \"93f818ae387cef82948ff2fcec1a993e016f97ee2fb2d1f587e04f19f3d84da0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:07:45.454044 containerd[1969]: time="2025-12-16T13:07:45.453836795Z" level=info msg="connecting to shim 303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156" address="unix:///run/containerd/s/3ffc3b25b520b353944ec4494e1e1dc1078f08281c0bd0f328649cef92868711" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:07:45.472300 containerd[1969]: time="2025-12-16T13:07:45.472236808Z" level=info msg="Container 6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:45.505383 containerd[1969]: time="2025-12-16T13:07:45.505216665Z" level=info msg="CreateContainer within sandbox 
\"93f818ae387cef82948ff2fcec1a993e016f97ee2fb2d1f587e04f19f3d84da0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3\"" Dec 16 13:07:45.508391 systemd[1]: Started cri-containerd-303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156.scope - libcontainer container 303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156. Dec 16 13:07:45.513768 containerd[1969]: time="2025-12-16T13:07:45.513449842Z" level=info msg="StartContainer for \"6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3\"" Dec 16 13:07:45.529803 containerd[1969]: time="2025-12-16T13:07:45.529748280Z" level=info msg="connecting to shim 6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3" address="unix:///run/containerd/s/98841fd8c440a738d64bed110e4a697357dbcad97db37d169aaa3573e7f06a82" protocol=ttrpc version=3 Dec 16 13:07:45.545000 audit: BPF prog-id=145 op=LOAD Dec 16 13:07:45.546000 audit: BPF prog-id=146 op=LOAD Dec 16 13:07:45.546000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.546000 audit: BPF prog-id=146 op=UNLOAD Dec 16 13:07:45.546000 audit[3605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.546000 audit: BPF prog-id=147 op=LOAD Dec 16 13:07:45.546000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.546000 audit: BPF prog-id=148 op=LOAD Dec 16 13:07:45.546000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.546000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.546000 audit: BPF prog-id=148 op=UNLOAD Dec 16 13:07:45.546000 audit[3605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.546000 audit: BPF prog-id=147 op=UNLOAD Dec 16 13:07:45.546000 audit[3605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.547000 audit: BPF prog-id=149 op=LOAD Dec 16 13:07:45.547000 audit[3605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3594 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330336239363365396633613934353464663463323333616264366536 Dec 16 13:07:45.565780 systemd[1]: Started cri-containerd-6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3.scope - libcontainer container 6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3. 
Dec 16 13:07:45.623660 containerd[1969]: time="2025-12-16T13:07:45.623594001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mtmv7,Uid:7a4792f6-b125-4976-aefd-49f96ccab0c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156\"" Dec 16 13:07:45.627664 containerd[1969]: time="2025-12-16T13:07:45.627605082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:07:45.650000 audit: BPF prog-id=150 op=LOAD Dec 16 13:07:45.650000 audit[3623]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3550 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664643038306162323765636266396231636264353930623761646234 Dec 16 13:07:45.650000 audit: BPF prog-id=151 op=LOAD Dec 16 13:07:45.650000 audit[3623]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3550 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664643038306162323765636266396231636264353930623761646234 Dec 16 13:07:45.650000 audit: BPF prog-id=151 op=UNLOAD Dec 16 13:07:45.650000 audit[3623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3550 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664643038306162323765636266396231636264353930623761646234 Dec 16 13:07:45.650000 audit: BPF prog-id=150 op=UNLOAD Dec 16 13:07:45.650000 audit[3623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3550 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664643038306162323765636266396231636264353930623761646234 Dec 16 13:07:45.650000 audit: BPF prog-id=152 op=LOAD Dec 16 13:07:45.650000 audit[3623]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3550 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:45.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664643038306162323765636266396231636264353930623761646234 Dec 16 13:07:45.689207 containerd[1969]: time="2025-12-16T13:07:45.689106522Z" level=info msg="StartContainer for \"6dd080ab27ecbf9b1cbd590b7adb41d4214fd005b1974d38e91d5c9979d293d3\" returns successfully" Dec 16 13:07:47.357792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790190091.mount: Deactivated successfully. Dec 16 13:07:48.521750 containerd[1969]: time="2025-12-16T13:07:48.519513588Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:48.521750 containerd[1969]: time="2025-12-16T13:07:48.520547096Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 13:07:48.522705 containerd[1969]: time="2025-12-16T13:07:48.522668124Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:48.530876 containerd[1969]: time="2025-12-16T13:07:48.530822795Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:07:48.532196 containerd[1969]: time="2025-12-16T13:07:48.532146515Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.904206831s" Dec 16 13:07:48.532366 containerd[1969]: time="2025-12-16T13:07:48.532207976Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:07:48.538266 containerd[1969]: time="2025-12-16T13:07:48.538213982Z" level=info msg="CreateContainer within sandbox \"303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:07:48.559446 containerd[1969]: time="2025-12-16T13:07:48.559399379Z" level=info msg="Container 34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:07:48.564007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95782928.mount: Deactivated successfully. 
Dec 16 13:07:48.573656 containerd[1969]: time="2025-12-16T13:07:48.573592974Z" level=info msg="CreateContainer within sandbox \"303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\"" Dec 16 13:07:48.574524 containerd[1969]: time="2025-12-16T13:07:48.574473612Z" level=info msg="StartContainer for \"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\"" Dec 16 13:07:48.577538 containerd[1969]: time="2025-12-16T13:07:48.577414233Z" level=info msg="connecting to shim 34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab" address="unix:///run/containerd/s/3ffc3b25b520b353944ec4494e1e1dc1078f08281c0bd0f328649cef92868711" protocol=ttrpc version=3 Dec 16 13:07:48.613779 systemd[1]: Started cri-containerd-34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab.scope - libcontainer container 34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab. Dec 16 13:07:48.638000 audit: BPF prog-id=153 op=LOAD Dec 16 13:07:48.639000 audit: BPF prog-id=154 op=LOAD Dec 16 13:07:48.639000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.640000 audit: BPF prog-id=154 op=UNLOAD Dec 16 13:07:48.640000 audit[3669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.640000 audit: BPF prog-id=155 op=LOAD Dec 16 13:07:48.640000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.643000 audit: BPF prog-id=156 op=LOAD Dec 16 13:07:48.643000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.643000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.643000 audit: BPF prog-id=156 op=UNLOAD Dec 16 13:07:48.643000 audit[3669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.643000 audit: BPF prog-id=155 op=UNLOAD Dec 16 13:07:48.643000 audit[3669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.643000 audit: BPF prog-id=157 op=LOAD Dec 16 13:07:48.643000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3594 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:48.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334633138336636383638343638353535353866303262346534633139 Dec 16 13:07:48.691082 containerd[1969]: time="2025-12-16T13:07:48.691022618Z" level=info msg="StartContainer for \"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\" returns successfully" Dec 16 13:07:48.705763 kubelet[3309]: I1216 13:07:48.705034 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4dxsc" podStartSLOduration=4.704774465 podStartE2EDuration="4.704774465s" podCreationTimestamp="2025-12-16 13:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:07:45.938687747 +0000 UTC m=+5.545803804" watchObservedRunningTime="2025-12-16 13:07:48.704774465 +0000 UTC m=+8.311890522" Dec 16 13:07:48.942736 kubelet[3309]: I1216 13:07:48.941981 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mtmv7" podStartSLOduration=2.0350126 podStartE2EDuration="4.94190181s" podCreationTimestamp="2025-12-16 13:07:44 +0000 UTC" firstStartedPulling="2025-12-16 13:07:45.6264628 +0000 UTC m=+5.233578847" lastFinishedPulling="2025-12-16 13:07:48.533352006 +0000 UTC m=+8.140468057" observedRunningTime="2025-12-16 
13:07:48.941024028 +0000 UTC m=+8.548140083" watchObservedRunningTime="2025-12-16 13:07:48.94190181 +0000 UTC m=+8.549017846" Dec 16 13:07:49.950000 audit[3737]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:49.950000 audit[3737]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe257644c0 a2=0 a3=7ffe257644ac items=0 ppid=3636 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:49.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:07:49.954000 audit[3736]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:49.954000 audit[3736]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3f0e59d0 a2=0 a3=7ffd3f0e59bc items=0 ppid=3636 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:49.954000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:07:49.958000 audit[3738]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:49.958000 audit[3738]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd94fa2370 a2=0 a3=7ffd94fa235c items=0 ppid=3636 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:49.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:07:49.960000 audit[3740]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:49.960000 audit[3740]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe24420ef0 a2=0 a3=7ffe24420edc items=0 ppid=3636 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:49.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:07:49.962000 audit[3741]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:49.962000 audit[3741]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3f423d80 a2=0 a3=7ffc3f423d6c items=0 ppid=3636 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:49.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:07:49.963000 audit[3742]: 
NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:49.963000 audit[3742]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff30e200 a2=0 a3=7ffcff30e1ec items=0 ppid=3636 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:49.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:07:50.112000 audit[3745]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.112000 audit[3745]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0ae9ff70 a2=0 a3=7ffd0ae9ff5c items=0 ppid=3636 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:07:50.128000 audit[3747]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.128000 audit[3747]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdbeb246d0 a2=0 a3=7ffdbeb246bc items=0 ppid=3636 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 13:07:50.145000 audit[3750]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.145000 audit[3750]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcbca4f2f0 a2=0 a3=7ffcbca4f2dc items=0 ppid=3636 pid=3750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 13:07:50.147000 audit[3751]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.147000 audit[3751]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8d75f890 a2=0 a3=7ffd8d75f87c items=0 ppid=3636 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.147000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:07:50.151000 audit[3753]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.151000 audit[3753]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcbf1cd8e0 a2=0 a3=7ffcbf1cd8cc items=0 ppid=3636 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:07:50.152000 audit[3754]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.152000 audit[3754]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6ff47040 a2=0 a3=7ffd6ff4702c items=0 ppid=3636 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:07:50.157000 audit[3756]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.157000 audit[3756]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd7a285a50 a2=0 a3=7ffd7a285a3c items=0 ppid=3636 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:07:50.162000 audit[3759]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.162000 audit[3759]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffddbc09560 a2=0 a3=7ffddbc0954c items=0 ppid=3636 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 13:07:50.164000 audit[3760]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.164000 audit[3760]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb5f798b0 a2=0 a3=7fffb5f7989c items=0 
ppid=3636 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:07:50.167000 audit[3762]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.167000 audit[3762]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff125f7fe0 a2=0 a3=7fff125f7fcc items=0 ppid=3636 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:07:50.169000 audit[3763]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.169000 audit[3763]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4d4df950 a2=0 a3=7ffe4d4df93c items=0 ppid=3636 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:07:50.173000 audit[3765]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.173000 audit[3765]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd60bdbde0 a2=0 a3=7ffd60bdbdcc items=0 ppid=3636 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:07:50.178000 audit[3768]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.178000 audit[3768]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2a527e70 a2=0 a3=7ffc2a527e5c items=0 ppid=3636 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:07:50.185000 audit[3771]: NETFILTER_CFG table=filter:73 
family=2 entries=1 op=nft_register_rule pid=3771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.185000 audit[3771]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc22ceafa0 a2=0 a3=7ffc22ceaf8c items=0 ppid=3636 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:07:50.188000 audit[3772]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.188000 audit[3772]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3591f000 a2=0 a3=7fff3591efec items=0 ppid=3636 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:07:50.193000 audit[3774]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.193000 audit[3774]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffb183ee40 a2=0 a3=7fffb183ee2c items=0 ppid=3636 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:07:50.200000 audit[3777]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.200000 audit[3777]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd075e0a90 a2=0 a3=7ffd075e0a7c items=0 ppid=3636 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:07:50.203000 audit[3778]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.203000 audit[3778]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd46581700 a2=0 a3=7ffd465816ec items=0 ppid=3636 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.203000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:07:50.210000 audit[3780]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:07:50.210000 audit[3780]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc598a6ec0 a2=0 a3=7ffc598a6eac items=0 ppid=3636 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:07:50.263000 audit[3786]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:50.263000 audit[3786]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8bda3fd0 a2=0 a3=7ffc8bda3fbc items=0 ppid=3636 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:50.279000 audit[3786]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:50.279000 audit[3786]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc8bda3fd0 a2=0 a3=7ffc8bda3fbc items=0 ppid=3636 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:50.282000 audit[3791]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.282000 audit[3791]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffebd2bcb80 a2=0 a3=7ffebd2bcb6c items=0 ppid=3636 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:07:50.286000 audit[3793]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.286000 audit[3793]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffef6fac340 a2=0 a3=7ffef6fac32c items=0 ppid=3636 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.286000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 13:07:50.293000 audit[3796]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.293000 audit[3796]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcd428bc60 a2=0 a3=7ffcd428bc4c items=0 ppid=3636 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 13:07:50.296000 audit[3797]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.296000 audit[3797]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe44e73b0 a2=0 a3=7fffe44e739c items=0 ppid=3636 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:07:50.300000 audit[3799]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.300000 audit[3799]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe871dc030 a2=0 a3=7ffe871dc01c items=0 ppid=3636 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:07:50.302000 audit[3800]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.302000 audit[3800]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec7d58e20 a2=0 a3=7ffec7d58e0c items=0 ppid=3636 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:07:50.306000 audit[3802]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.306000 audit[3802]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdab820510 a2=0 
a3=7ffdab8204fc items=0 ppid=3636 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.306000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 13:07:50.312000 audit[3805]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.312000 audit[3805]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdebc85780 a2=0 a3=7ffdebc8576c items=0 ppid=3636 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.312000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:07:50.313000 audit[3806]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.313000 audit[3806]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd0206e30 a2=0 a3=7fffd0206e1c items=0 ppid=3636 pid=3806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:07:50.317000 audit[3808]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.317000 audit[3808]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffed5b1b2f0 a2=0 a3=7ffed5b1b2dc items=0 ppid=3636 pid=3808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:07:50.319000 audit[3809]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.319000 audit[3809]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe879b7f60 a2=0 a3=7ffe879b7f4c items=0 ppid=3636 pid=3809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:07:50.323000 
audit[3811]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3811 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.323000 audit[3811]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3397a3c0 a2=0 a3=7ffd3397a3ac items=0 ppid=3636 pid=3811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:07:50.328000 audit[3814]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.328000 audit[3814]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc69c5dd00 a2=0 a3=7ffc69c5dcec items=0 ppid=3636 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:07:50.333000 audit[3817]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.333000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4c477520 a2=0 a3=7ffe4c47750c items=0 ppid=3636 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 13:07:50.335000 audit[3818]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.335000 audit[3818]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd7685b3b0 a2=0 a3=7ffd7685b39c items=0 ppid=3636 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:07:50.339000 audit[3820]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.339000 audit[3820]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe1b178100 a2=0 a3=7ffe1b1780ec items=0 ppid=3636 pid=3820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:07:50.344000 audit[3823]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.344000 audit[3823]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa028ce00 a2=0 a3=7fffa028cdec items=0 ppid=3636 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:07:50.346000 audit[3824]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.346000 audit[3824]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0877db80 a2=0 a3=7ffd0877db6c items=0 ppid=3636 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:07:50.351000 audit[3826]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3826 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.351000 audit[3826]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff4d0d8db0 a2=0 a3=7fff4d0d8d9c items=0 ppid=3636 pid=3826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:07:50.353000 audit[3827]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.353000 audit[3827]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe69980ac0 a2=0 a3=7ffe69980aac items=0 ppid=3636 pid=3827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:07:50.359000 audit[3829]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.359000 audit[3829]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7fff1b5525e0 a2=0 a3=7fff1b5525cc items=0 ppid=3636 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:07:50.369190 kernel: kauditd_printk_skb: 215 callbacks suppressed Dec 16 13:07:50.369336 kernel: audit: type=1325 audit(1765890470.368:526): table=filter:102 family=10 entries=1 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.368000 audit[3832]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:07:50.379127 kernel: audit: type=1300 audit(1765890470.368:526): arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd8d2ec170 a2=0 a3=7ffd8d2ec15c items=0 ppid=3636 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.368000 audit[3832]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd8d2ec170 a2=0 a3=7ffd8d2ec15c items=0 ppid=3636 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:07:50.381633 kernel: audit: type=1327 audit(1765890470.368:526): proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:07:50.375000 audit[3834]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:07:50.384888 kernel: audit: type=1325 audit(1765890470.375:527): table=filter:103 family=10 entries=3 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:07:50.375000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe14de84d0 a2=0 a3=7ffe14de84bc items=0 ppid=3636 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.375000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:50.394502 kernel: audit: type=1300 audit(1765890470.375:527): arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe14de84d0 a2=0 a3=7ffe14de84bc items=0 ppid=3636 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.394575 kernel: audit: type=1327 audit(1765890470.375:527): proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:50.399991 kernel: audit: type=1325 audit(1765890470.375:528): 
table=nat:104 family=10 entries=7 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:07:50.375000 audit[3834]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:07:50.375000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe14de84d0 a2=0 a3=7ffe14de84bc items=0 ppid=3636 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.375000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:50.408694 kernel: audit: type=1300 audit(1765890470.375:528): arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe14de84d0 a2=0 a3=7ffe14de84bc items=0 ppid=3636 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:50.408806 kernel: audit: type=1327 audit(1765890470.375:528): proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:26.262000 audit[2331]: USER_END pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:08:26.262521 sudo[2331]: pam_unix(sudo:session): session closed for user root Dec 16 13:08:26.262000 audit[2331]: CRED_DISP pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:08:26.269899 kernel: audit: type=1106 audit(1765890506.262:529): pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:08:26.270038 kernel: audit: type=1104 audit(1765890506.262:530): pid=2331 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:08:26.289704 sshd[2330]: Connection closed by 139.178.89.65 port 57644 Dec 16 13:08:26.288833 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:26.291000 audit[2327]: USER_END pid=2327 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:08:26.298110 kernel: audit: type=1106 audit(1765890506.291:531): pid=2327 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:08:26.302302 systemd[1]: sshd@6-172.31.28.98:22-139.178.89.65:57644.service: Deactivated successfully. Dec 16 13:08:26.298000 audit[2327]: CRED_DISP pid=2327 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:08:26.310173 kernel: audit: type=1104 audit(1765890506.298:532): pid=2327 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:08:26.314401 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:08:26.314777 systemd[1]: session-7.scope: Consumed 6.797s CPU time, 153.4M memory peak. Dec 16 13:08:26.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.98:22-139.178.89.65:57644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:26.321218 kernel: audit: type=1131 audit(1765890506.302:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.28.98:22-139.178.89.65:57644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:26.324352 systemd-logind[1939]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:08:26.327358 systemd-logind[1939]: Removed session 7. 
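The PROCTITLE fields in the audit records above are the audited command lines, hex-encoded with NUL bytes separating the argv elements; the kube-proxy entries decode to plain iptables/ip6tables invocations such as the KUBE-PROXY-CANARY chain creation. A minimal decoding sketch follows, assuming only the standard Linux audit PROCTITLE encoding; decode_proctitle is an illustrative helper name and the sample value is copied from one of the records above.

    # Decode a Linux audit PROCTITLE value: the field is the process command
    # line, hex-encoded, with NUL bytes separating the argv elements.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # Sample value copied from the NETFILTER_CFG records above.
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sample))
    # Expected output: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle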
Dec 16 13:08:26.908000 audit[3890]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:26.913172 kernel: audit: type=1325 audit(1765890506.908:534): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:26.908000 audit[3890]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb607d5b0 a2=0 a3=7fffb607d59c items=0 ppid=3636 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:26.922114 kernel: audit: type=1300 audit(1765890506.908:534): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb607d5b0 a2=0 a3=7fffb607d59c items=0 ppid=3636 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:26.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:26.929121 kernel: audit: type=1327 audit(1765890506.908:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:26.916000 audit[3890]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:26.934092 kernel: audit: type=1325 audit(1765890506.916:535): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:26.916000 audit[3890]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb607d5b0 a2=0 a3=0 items=0 ppid=3636 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:26.942090 kernel: audit: type=1300 audit(1765890506.916:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb607d5b0 a2=0 a3=0 items=0 ppid=3636 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:26.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:26.950000 audit[3892]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:26.950000 audit[3892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa23fe370 a2=0 a3=7fffa23fe35c items=0 ppid=3636 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:26.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:26.955000 audit[3892]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3892 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:26.955000 audit[3892]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa23fe370 a2=0 a3=0 items=0 ppid=3636 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:26.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:31.701000 audit[3894]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.702612 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:08:31.702752 kernel: audit: type=1325 audit(1765890511.701:538): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.701000 audit[3894]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcbe656130 a2=0 a3=7ffcbe65611c items=0 ppid=3636 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.717102 kernel: audit: type=1300 audit(1765890511.701:538): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcbe656130 a2=0 a3=7ffcbe65611c items=0 ppid=3636 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:31.722113 kernel: audit: type=1327 audit(1765890511.701:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:31.709000 audit[3894]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.726103 kernel: audit: type=1325 audit(1765890511.709:539): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.709000 audit[3894]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbe656130 a2=0 a3=0 items=0 ppid=3636 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.734141 kernel: audit: type=1300 audit(1765890511.709:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbe656130 a2=0 a3=0 items=0 ppid=3636 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:31.740662 kernel: audit: type=1327 audit(1765890511.709:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
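The iptables-restore and ip6tables-restore records above show kube-proxy resyncing its rule set, with the filter-table entries= counts growing on each pass (15, 16, 17, 18, ...) as services are programmed. A rough, stdlib-only sketch for summarising such NETFILTER_CFG records is below; the regular expression and the "audit.log" path are assumptions for illustration, not part of this host's configuration.

    # Summarise NETFILTER_CFG audit records by address family, table and
    # operation (family=2 is AF_INET/IPv4, family=10 is AF_INET6/IPv6).
    import re
    from collections import Counter

    pattern = re.compile(
        r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)"
    )

    def summarise(lines):
        counts = Counter()
        for line in lines:
            m = pattern.search(line)
            if m:
                table, family, entries, op = m.groups()
                proto = {"2": "ipv4", "10": "ipv6"}.get(family, family)
                counts[(proto, table, op)] += int(entries)
        return counts

    # Illustrative usage; "audit.log" is an assumed export of these records.
    with open("audit.log", encoding="utf-8", errors="replace") as fh:
        for key, total in sorted(summarise(fh).items()):
            print(*key, total)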
Dec 16 13:08:31.747000 audit[3896]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.752109 kernel: audit: type=1325 audit(1765890511.747:540): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.747000 audit[3896]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe8cb47f50 a2=0 a3=7ffe8cb47f3c items=0 ppid=3636 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.761181 kernel: audit: type=1300 audit(1765890511.747:540): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe8cb47f50 a2=0 a3=7ffe8cb47f3c items=0 ppid=3636 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:31.754000 audit[3896]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.766961 kernel: audit: type=1327 audit(1765890511.747:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:31.767186 kernel: audit: type=1325 audit(1765890511.754:541): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:31.754000 audit[3896]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8cb47f50 a2=0 a3=0 items=0 ppid=3636 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:31.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:32.869000 audit[3900]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:32.869000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1ccb8f50 a2=0 a3=7ffd1ccb8f3c items=0 ppid=3636 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:32.869000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:32.874000 audit[3900]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:32.874000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1ccb8f50 a2=0 a3=0 items=0 ppid=3636 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:08:32.874000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:33.845000 audit[3902]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:33.845000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3e930690 a2=0 a3=7fff3e93067c items=0 ppid=3636 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:33.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:33.854000 audit[3902]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:33.883345 systemd[1]: Created slice kubepods-besteffort-pod6906d951_6fa8_47bc_86fc_6acf0fe72741.slice - libcontainer container kubepods-besteffort-pod6906d951_6fa8_47bc_86fc_6acf0fe72741.slice. Dec 16 13:08:33.854000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3e930690 a2=0 a3=0 items=0 ppid=3636 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:33.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:33.907293 kubelet[3309]: I1216 13:08:33.907248 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6906d951-6fa8-47bc-86fc-6acf0fe72741-typha-certs\") pod \"calico-typha-6c55bc7669-f8p4b\" (UID: \"6906d951-6fa8-47bc-86fc-6acf0fe72741\") " pod="calico-system/calico-typha-6c55bc7669-f8p4b" Dec 16 13:08:33.908371 kubelet[3309]: I1216 13:08:33.907926 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn84p\" (UniqueName: \"kubernetes.io/projected/6906d951-6fa8-47bc-86fc-6acf0fe72741-kube-api-access-vn84p\") pod \"calico-typha-6c55bc7669-f8p4b\" (UID: \"6906d951-6fa8-47bc-86fc-6acf0fe72741\") " pod="calico-system/calico-typha-6c55bc7669-f8p4b" Dec 16 13:08:33.908371 kubelet[3309]: I1216 13:08:33.907989 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6906d951-6fa8-47bc-86fc-6acf0fe72741-tigera-ca-bundle\") pod \"calico-typha-6c55bc7669-f8p4b\" (UID: \"6906d951-6fa8-47bc-86fc-6acf0fe72741\") " pod="calico-system/calico-typha-6c55bc7669-f8p4b" Dec 16 13:08:34.045337 systemd[1]: Created slice kubepods-besteffort-podde32fcce_0e61_4341_99cc_88a327be459b.slice - libcontainer container kubepods-besteffort-podde32fcce_0e61_4341_99cc_88a327be459b.slice. 
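The "Created slice" entries above reflect kubelet's systemd cgroup naming: the pod UID has its dashes replaced with underscores and is wrapped in kubepods-<qos>-pod<uid>.slice, which is why the calico-typha pod UID 6906d951-6fa8-47bc-86fc-6acf0fe72741 shows up as kubepods-besteffort-pod6906d951_6fa8_47bc_86fc_6acf0fe72741.slice. A minimal sketch of that mapping, assuming the BestEffort QoS class seen here; besteffort_slice_name is an illustrative helper, not kubelet code.

    # Map a pod UID to the slice name kubelet uses for a BestEffort pod with
    # the systemd cgroup driver (dashes in the UID become underscores).
    def besteffort_slice_name(pod_uid: str) -> str:
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    # UID taken from the calico-typha records above.
    print(besteffort_slice_name("6906d951-6fa8-47bc-86fc-6acf0fe72741"))
    # Expected: kubepods-besteffort-pod6906d951_6fa8_47bc_86fc_6acf0fe72741.slice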
Dec 16 13:08:34.109561 kubelet[3309]: I1216 13:08:34.109411 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-policysync\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109561 kubelet[3309]: I1216 13:08:34.109466 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de32fcce-0e61-4341-99cc-88a327be459b-tigera-ca-bundle\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109561 kubelet[3309]: I1216 13:08:34.109491 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-xtables-lock\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109561 kubelet[3309]: I1216 13:08:34.109515 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-cni-net-dir\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109561 kubelet[3309]: I1216 13:08:34.109539 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-lib-modules\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109879 kubelet[3309]: I1216 13:08:34.109583 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/de32fcce-0e61-4341-99cc-88a327be459b-node-certs\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109879 kubelet[3309]: I1216 13:08:34.109632 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-cni-log-dir\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109879 kubelet[3309]: I1216 13:08:34.109653 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-var-lib-calico\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109879 kubelet[3309]: I1216 13:08:34.109676 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-var-run-calico\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.109879 kubelet[3309]: I1216 13:08:34.109699 3309 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd8hb\" (UniqueName: \"kubernetes.io/projected/de32fcce-0e61-4341-99cc-88a327be459b-kube-api-access-hd8hb\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.110436 kubelet[3309]: I1216 13:08:34.109726 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-cni-bin-dir\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.110436 kubelet[3309]: I1216 13:08:34.109755 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/de32fcce-0e61-4341-99cc-88a327be459b-flexvol-driver-host\") pod \"calico-node-csw6z\" (UID: \"de32fcce-0e61-4341-99cc-88a327be459b\") " pod="calico-system/calico-node-csw6z" Dec 16 13:08:34.169533 kubelet[3309]: E1216 13:08:34.169297 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:34.210456 kubelet[3309]: I1216 13:08:34.210393 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c808a4b9-6eee-4490-92c6-5f208009c5e7-varrun\") pod \"csi-node-driver-h272q\" (UID: \"c808a4b9-6eee-4490-92c6-5f208009c5e7\") " pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:34.210717 kubelet[3309]: I1216 13:08:34.210471 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c808a4b9-6eee-4490-92c6-5f208009c5e7-socket-dir\") pod \"csi-node-driver-h272q\" (UID: \"c808a4b9-6eee-4490-92c6-5f208009c5e7\") " pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:34.210717 kubelet[3309]: I1216 13:08:34.210511 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c808a4b9-6eee-4490-92c6-5f208009c5e7-registration-dir\") pod \"csi-node-driver-h272q\" (UID: \"c808a4b9-6eee-4490-92c6-5f208009c5e7\") " pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:34.210717 kubelet[3309]: I1216 13:08:34.210544 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c808a4b9-6eee-4490-92c6-5f208009c5e7-kubelet-dir\") pod \"csi-node-driver-h272q\" (UID: \"c808a4b9-6eee-4490-92c6-5f208009c5e7\") " pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:34.210717 kubelet[3309]: I1216 13:08:34.210653 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qnn\" (UniqueName: \"kubernetes.io/projected/c808a4b9-6eee-4490-92c6-5f208009c5e7-kube-api-access-m7qnn\") pod \"csi-node-driver-h272q\" (UID: \"c808a4b9-6eee-4490-92c6-5f208009c5e7\") " pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:34.221813 containerd[1969]: time="2025-12-16T13:08:34.221743333Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-6c55bc7669-f8p4b,Uid:6906d951-6fa8-47bc-86fc-6acf0fe72741,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:34.225240 kubelet[3309]: E1216 13:08:34.225209 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.225366 kubelet[3309]: W1216 13:08:34.225263 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.225366 kubelet[3309]: E1216 13:08:34.225301 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.251088 kubelet[3309]: E1216 13:08:34.250385 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.251088 kubelet[3309]: W1216 13:08:34.250420 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.251088 kubelet[3309]: E1216 13:08:34.250448 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.290106 containerd[1969]: time="2025-12-16T13:08:34.290019445Z" level=info msg="connecting to shim 44b3949476a48278b3546e9a6d1396e71dd5ebd4f4d67a8050ac88a4d48b41d7" address="unix:///run/containerd/s/71e60508dbcb0cc9ba84615c83ca8924f0353692fd3b68d681b6b2ab6482c3e6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:34.313230 kubelet[3309]: E1216 13:08:34.313198 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.315829 kubelet[3309]: W1216 13:08:34.313410 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.315829 kubelet[3309]: E1216 13:08:34.313441 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.315829 kubelet[3309]: E1216 13:08:34.315434 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.315829 kubelet[3309]: W1216 13:08:34.315454 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.315829 kubelet[3309]: E1216 13:08:34.315512 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:34.316400 kubelet[3309]: E1216 13:08:34.316319 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.316400 kubelet[3309]: W1216 13:08:34.316337 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.316400 kubelet[3309]: E1216 13:08:34.316357 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.317344 kubelet[3309]: E1216 13:08:34.317108 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.317344 kubelet[3309]: W1216 13:08:34.317126 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.317344 kubelet[3309]: E1216 13:08:34.317145 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.317761 kubelet[3309]: E1216 13:08:34.317747 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.317984 kubelet[3309]: W1216 13:08:34.317829 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.317984 kubelet[3309]: E1216 13:08:34.317848 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.318410 kubelet[3309]: E1216 13:08:34.318394 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.318498 kubelet[3309]: W1216 13:08:34.318486 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.318574 kubelet[3309]: E1216 13:08:34.318562 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.319368 kubelet[3309]: E1216 13:08:34.319352 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.319543 kubelet[3309]: W1216 13:08:34.319477 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.319639 kubelet[3309]: E1216 13:08:34.319625 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:34.321094 kubelet[3309]: E1216 13:08:34.320478 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.321480 kubelet[3309]: W1216 13:08:34.321212 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.321480 kubelet[3309]: E1216 13:08:34.321238 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.322090 kubelet[3309]: E1216 13:08:34.321988 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.322090 kubelet[3309]: W1216 13:08:34.322031 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.322090 kubelet[3309]: E1216 13:08:34.322049 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.322629 kubelet[3309]: E1216 13:08:34.322567 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.322629 kubelet[3309]: W1216 13:08:34.322599 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.322629 kubelet[3309]: E1216 13:08:34.322614 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.323256 kubelet[3309]: E1216 13:08:34.323196 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.323256 kubelet[3309]: W1216 13:08:34.323211 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.324159 kubelet[3309]: E1216 13:08:34.323362 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.324550 kubelet[3309]: E1216 13:08:34.324505 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.324550 kubelet[3309]: W1216 13:08:34.324521 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.324762 kubelet[3309]: E1216 13:08:34.324635 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:34.325157 kubelet[3309]: E1216 13:08:34.325127 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.325354 kubelet[3309]: W1216 13:08:34.325238 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.325354 kubelet[3309]: E1216 13:08:34.325259 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.325831 kubelet[3309]: E1216 13:08:34.325784 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.325831 kubelet[3309]: W1216 13:08:34.325797 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.326023 kubelet[3309]: E1216 13:08:34.325812 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.326421 kubelet[3309]: E1216 13:08:34.326391 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.326611 kubelet[3309]: W1216 13:08:34.326509 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.326611 kubelet[3309]: E1216 13:08:34.326542 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.327163 kubelet[3309]: E1216 13:08:34.327134 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.327372 kubelet[3309]: W1216 13:08:34.327252 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.327372 kubelet[3309]: E1216 13:08:34.327274 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.327949 kubelet[3309]: E1216 13:08:34.327878 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.327949 kubelet[3309]: W1216 13:08:34.327893 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.328325 kubelet[3309]: E1216 13:08:34.327908 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:34.329091 kubelet[3309]: E1216 13:08:34.328976 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.329091 kubelet[3309]: W1216 13:08:34.328992 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.329091 kubelet[3309]: E1216 13:08:34.329018 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.331865 kubelet[3309]: E1216 13:08:34.331840 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.332049 kubelet[3309]: W1216 13:08:34.331961 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.332049 kubelet[3309]: E1216 13:08:34.331981 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.332784 kubelet[3309]: E1216 13:08:34.332736 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.332784 kubelet[3309]: W1216 13:08:34.332750 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.332784 kubelet[3309]: E1216 13:08:34.332766 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.333396 kubelet[3309]: E1216 13:08:34.333337 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.333396 kubelet[3309]: W1216 13:08:34.333353 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.333396 kubelet[3309]: E1216 13:08:34.333368 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.334303 kubelet[3309]: E1216 13:08:34.334257 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.334303 kubelet[3309]: W1216 13:08:34.334273 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.334303 kubelet[3309]: E1216 13:08:34.334287 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:34.334912 kubelet[3309]: E1216 13:08:34.334859 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.334912 kubelet[3309]: W1216 13:08:34.334874 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.334912 kubelet[3309]: E1216 13:08:34.334888 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.335578 kubelet[3309]: E1216 13:08:34.335533 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.335578 kubelet[3309]: W1216 13:08:34.335547 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.335578 kubelet[3309]: E1216 13:08:34.335563 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.336439 kubelet[3309]: E1216 13:08:34.336352 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.336439 kubelet[3309]: W1216 13:08:34.336369 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.336439 kubelet[3309]: E1216 13:08:34.336385 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:08:34.361629 systemd[1]: Started cri-containerd-44b3949476a48278b3546e9a6d1396e71dd5ebd4f4d67a8050ac88a4d48b41d7.scope - libcontainer container 44b3949476a48278b3546e9a6d1396e71dd5ebd4f4d67a8050ac88a4d48b41d7. Dec 16 13:08:34.367279 containerd[1969]: time="2025-12-16T13:08:34.367233224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-csw6z,Uid:de32fcce-0e61-4341-99cc-88a327be459b,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:34.379681 kubelet[3309]: E1216 13:08:34.379613 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:08:34.379681 kubelet[3309]: W1216 13:08:34.379675 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:08:34.379900 kubelet[3309]: E1216 13:08:34.379708 3309 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:08:34.439448 containerd[1969]: time="2025-12-16T13:08:34.439361714Z" level=info msg="connecting to shim 1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188" address="unix:///run/containerd/s/e9b62a8373bc5ada7a35b5809675abb6ffb16f65dfc92a9fb41ad7c7be53db9c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:08:34.484000 audit: BPF prog-id=158 op=LOAD Dec 16 13:08:34.489000 audit: BPF prog-id=159 op=LOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe238 a2=98 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.489000 audit: BPF prog-id=159 op=UNLOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.489000 audit: BPF prog-id=160 op=LOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe488 a2=98 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.489000 audit: BPF prog-id=161 op=LOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fe218 a2=98 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.489000 audit: BPF prog-id=161 op=UNLOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.489000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.489000 audit: BPF prog-id=162 op=LOAD Dec 16 13:08:34.489000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe6e8 a2=98 a3=0 items=0 ppid=3917 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434623339343934373661343832373862333534366539613664313339 Dec 16 13:08:34.505870 systemd[1]: Started cri-containerd-1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188.scope - libcontainer container 1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188. 
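The BPF prog-id LOAD/UNLOAD bursts above are emitted while runc (comm="runc", exe="/usr/bin/runc") sets up each container, and the surrounding SYSCALL records tie them to their shim via pid/ppid. A minimal sketch (assuming Python 3 with the journal text on stdin) that pulls those fields out of audit SYSCALL lines:

    import re
    import sys

    # key=value pairs; the value may be a quoted string or a bare token
    FIELD = re.compile(r'(\w+)=("[^"]*"|\S+)')

    def parse_audit_fields(line: str) -> dict:
        return {k: v.strip('"') for k, v in FIELD.findall(line)}

    for line in sys.stdin:
        if "SYSCALL" in line:
            f = parse_audit_fields(line)
            if f.get("comm") == "runc":
                # parent shim pid, runc pid, syscall number
                print(f.get("ppid"), f.get("pid"), "syscall", f.get("syscall"))

Against the records above this yields ppid 3917 and 3983 (the two sandbox shims) with runc pids 3929 and 3994, showing syscall 321 (bpf) for the program loads and 3 (close) when the descriptors are released.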
Dec 16 13:08:34.546000 audit: BPF prog-id=163 op=LOAD Dec 16 13:08:34.547000 audit: BPF prog-id=164 op=LOAD Dec 16 13:08:34.547000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.547000 audit: BPF prog-id=164 op=UNLOAD Dec 16 13:08:34.547000 audit[3994]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.548000 audit: BPF prog-id=165 op=LOAD Dec 16 13:08:34.548000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.548000 audit: BPF prog-id=166 op=LOAD Dec 16 13:08:34.548000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.548000 audit: BPF prog-id=166 op=UNLOAD Dec 16 13:08:34.548000 audit[3994]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.549000 audit: BPF prog-id=165 op=UNLOAD Dec 16 13:08:34.549000 audit[3994]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.549000 audit: BPF prog-id=167 op=LOAD Dec 16 13:08:34.549000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3983 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161306637376138376236316232303861313239363738386231366238 Dec 16 13:08:34.581041 containerd[1969]: time="2025-12-16T13:08:34.580872426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-csw6z,Uid:de32fcce-0e61-4341-99cc-88a327be459b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\"" Dec 16 13:08:34.585036 containerd[1969]: time="2025-12-16T13:08:34.584979110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:08:34.671571 containerd[1969]: time="2025-12-16T13:08:34.671375531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c55bc7669-f8p4b,Uid:6906d951-6fa8-47bc-86fc-6acf0fe72741,Namespace:calico-system,Attempt:0,} returns sandbox id \"44b3949476a48278b3546e9a6d1396e71dd5ebd4f4d67a8050ac88a4d48b41d7\"" Dec 16 13:08:34.900000 audit[4029]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:34.900000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffecb4d45b0 a2=0 a3=7ffecb4d459c items=0 ppid=3636 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:34.905000 audit[4029]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:34.905000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecb4d45b0 a2=0 a3=0 items=0 ppid=3636 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:34.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:35.754507 kubelet[3309]: E1216 13:08:35.754412 3309 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:36.015833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62013077.mount: Deactivated successfully. Dec 16 13:08:36.157628 containerd[1969]: time="2025-12-16T13:08:36.157550853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:36.159968 containerd[1969]: time="2025-12-16T13:08:36.159558990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 13:08:36.161654 containerd[1969]: time="2025-12-16T13:08:36.161578512Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:36.166090 containerd[1969]: time="2025-12-16T13:08:36.165780286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:36.166090 containerd[1969]: time="2025-12-16T13:08:36.165895208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.580616568s" Dec 16 13:08:36.166090 containerd[1969]: time="2025-12-16T13:08:36.165926173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:08:36.169584 containerd[1969]: time="2025-12-16T13:08:36.169543287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:08:36.188864 containerd[1969]: time="2025-12-16T13:08:36.188205124Z" level=info msg="CreateContainer within sandbox \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:08:36.222497 containerd[1969]: time="2025-12-16T13:08:36.222445282Z" level=info msg="Container 7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:36.233685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529963006.mount: Deactivated successfully. 
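The "Pulled image ... in ..." message above reports the stored image size and the wall-clock pull time, which gives only a rough effective rate; the matching "stop pulling ... bytes read=0" line suggests the flexvol layers were already cached, so little data may actually have crossed the network. A minimal sketch (assuming Python 3) using the numbers from that record:

    # Values copied from the pod2daemon-flexvol pull above.
    size_bytes = 5_941_314          # reported image size
    duration_s = 1.580616568        # "in 1.580616568s"
    print(f"{size_bytes / duration_s / 1e6:.2f} MB/s")   # ~3.76 MB/s effective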
Dec 16 13:08:36.246852 containerd[1969]: time="2025-12-16T13:08:36.246792649Z" level=info msg="CreateContainer within sandbox \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d\"" Dec 16 13:08:36.247461 containerd[1969]: time="2025-12-16T13:08:36.247423717Z" level=info msg="StartContainer for \"7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d\"" Dec 16 13:08:36.248999 containerd[1969]: time="2025-12-16T13:08:36.248954244Z" level=info msg="connecting to shim 7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d" address="unix:///run/containerd/s/e9b62a8373bc5ada7a35b5809675abb6ffb16f65dfc92a9fb41ad7c7be53db9c" protocol=ttrpc version=3 Dec 16 13:08:36.279837 systemd[1]: Started cri-containerd-7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d.scope - libcontainer container 7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d. Dec 16 13:08:36.342000 audit: BPF prog-id=168 op=LOAD Dec 16 13:08:36.342000 audit[4038]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3983 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:36.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763386335663066303265643338336637366538323734326338323433 Dec 16 13:08:36.342000 audit: BPF prog-id=169 op=LOAD Dec 16 13:08:36.342000 audit[4038]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3983 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:36.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763386335663066303265643338336637366538323734326338323433 Dec 16 13:08:36.343000 audit: BPF prog-id=169 op=UNLOAD Dec 16 13:08:36.343000 audit[4038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:36.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763386335663066303265643338336637366538323734326338323433 Dec 16 13:08:36.343000 audit: BPF prog-id=168 op=UNLOAD Dec 16 13:08:36.343000 audit[4038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:36.343000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763386335663066303265643338336637366538323734326338323433 Dec 16 13:08:36.343000 audit: BPF prog-id=170 op=LOAD Dec 16 13:08:36.343000 audit[4038]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3983 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:36.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763386335663066303265643338336637366538323734326338323433 Dec 16 13:08:36.376843 containerd[1969]: time="2025-12-16T13:08:36.376775883Z" level=info msg="StartContainer for \"7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d\" returns successfully" Dec 16 13:08:36.396213 systemd[1]: cri-containerd-7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d.scope: Deactivated successfully. Dec 16 13:08:36.399000 audit: BPF prog-id=170 op=UNLOAD Dec 16 13:08:36.414503 containerd[1969]: time="2025-12-16T13:08:36.414444387Z" level=info msg="received container exit event container_id:\"7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d\" id:\"7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d\" pid:4052 exited_at:{seconds:1765890516 nanos:400756666}" Dec 16 13:08:36.495425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c8c5f0f02ed383f76e82742c82431cb8f9d4b97e9be0a85a482280dc2d7a09d-rootfs.mount: Deactivated successfully. 
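The container exit event above gives exited_at as an epoch seconds/nanos pair. A minimal sketch (assuming Python 3) converting it to a UTC timestamp, which lands at the same instant as the surrounding 13:08:36.4 journal entries:

    from datetime import datetime, timezone

    # exited_at from the 7c8c5f0f... exit event above
    seconds, nanos = 1765890516, 400756666
    ts = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    print(ts.isoformat())   # 2025-12-16T13:08:36.400757+00:00 (microsecond precision)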
Dec 16 13:08:37.753868 kubelet[3309]: E1216 13:08:37.753776 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:38.689225 containerd[1969]: time="2025-12-16T13:08:38.689165230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:38.690741 containerd[1969]: time="2025-12-16T13:08:38.690566670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:08:38.692723 containerd[1969]: time="2025-12-16T13:08:38.692671055Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:38.696491 containerd[1969]: time="2025-12-16T13:08:38.696417422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:38.698080 containerd[1969]: time="2025-12-16T13:08:38.697317054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.527721894s" Dec 16 13:08:38.698080 containerd[1969]: time="2025-12-16T13:08:38.697360066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:08:38.710821 containerd[1969]: time="2025-12-16T13:08:38.710742662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:08:38.760312 containerd[1969]: time="2025-12-16T13:08:38.760102172Z" level=info msg="CreateContainer within sandbox \"44b3949476a48278b3546e9a6d1396e71dd5ebd4f4d67a8050ac88a4d48b41d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:08:38.793032 containerd[1969]: time="2025-12-16T13:08:38.792962433Z" level=info msg="Container bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:38.829858 containerd[1969]: time="2025-12-16T13:08:38.829805548Z" level=info msg="CreateContainer within sandbox \"44b3949476a48278b3546e9a6d1396e71dd5ebd4f4d67a8050ac88a4d48b41d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f\"" Dec 16 13:08:38.830758 containerd[1969]: time="2025-12-16T13:08:38.830719475Z" level=info msg="StartContainer for \"bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f\"" Dec 16 13:08:38.854670 containerd[1969]: time="2025-12-16T13:08:38.832699709Z" level=info msg="connecting to shim bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f" address="unix:///run/containerd/s/71e60508dbcb0cc9ba84615c83ca8924f0353692fd3b68d681b6b2ab6482c3e6" protocol=ttrpc version=3 Dec 16 13:08:38.887953 systemd[1]: Started 
cri-containerd-bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f.scope - libcontainer container bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f. Dec 16 13:08:38.908000 audit: BPF prog-id=171 op=LOAD Dec 16 13:08:38.936242 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 16 13:08:38.936343 kernel: audit: type=1334 audit(1765890518.908:570): prog-id=171 op=LOAD Dec 16 13:08:38.936385 kernel: audit: type=1334 audit(1765890518.912:571): prog-id=172 op=LOAD Dec 16 13:08:38.936425 kernel: audit: type=1300 audit(1765890518.912:571): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.936463 kernel: audit: type=1327 audit(1765890518.912:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.936496 kernel: audit: type=1334 audit(1765890518.912:572): prog-id=172 op=UNLOAD Dec 16 13:08:38.936527 kernel: audit: type=1300 audit(1765890518.912:572): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.936559 kernel: audit: type=1327 audit(1765890518.912:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit: BPF prog-id=172 op=LOAD Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit: BPF prog-id=172 op=UNLOAD Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.943191 kernel: audit: type=1334 audit(1765890518.912:573): prog-id=173 op=LOAD Dec 16 13:08:38.912000 audit: BPF prog-id=173 op=LOAD Dec 16 13:08:38.950120 kernel: audit: type=1300 audit(1765890518.912:573): 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.950779 kernel: audit: type=1327 audit(1765890518.912:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit: BPF prog-id=174 op=LOAD Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit: BPF prog-id=174 op=UNLOAD Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit: BPF prog-id=173 op=UNLOAD Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:38.912000 audit: BPF prog-id=175 op=LOAD Dec 16 13:08:38.912000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3917 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264326232376264373831346630326662393534353461373165373430 Dec 16 13:08:39.011311 containerd[1969]: time="2025-12-16T13:08:39.011254728Z" level=info msg="StartContainer for \"bd2b27bd7814f02fb95454a71e740913d79f5bd77cf2878452c7a41fcf19087f\" returns successfully" Dec 16 13:08:39.754419 kubelet[3309]: E1216 13:08:39.754080 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:40.484170 kubelet[3309]: I1216 13:08:40.483090 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c55bc7669-f8p4b" podStartSLOduration=3.45844945 podStartE2EDuration="7.481388478s" podCreationTimestamp="2025-12-16 13:08:33 +0000 UTC" firstStartedPulling="2025-12-16 13:08:34.682905995 +0000 UTC m=+54.290022031" lastFinishedPulling="2025-12-16 13:08:38.705845013 +0000 UTC m=+58.312961059" observedRunningTime="2025-12-16 13:08:39.487156953 +0000 UTC m=+59.094273012" watchObservedRunningTime="2025-12-16 13:08:40.481388478 +0000 UTC m=+60.088504533" Dec 16 13:08:40.536000 audit[4132]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:40.536000 audit[4132]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf26961c0 a2=0 a3=7ffdf26961ac items=0 ppid=3636 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:40.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:40.538000 audit[4132]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:08:40.538000 audit[4132]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdf26961c0 a2=0 a3=7ffdf26961ac items=0 ppid=3636 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:40.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:08:41.753884 kubelet[3309]: E1216 13:08:41.753824 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:43.757654 kubelet[3309]: E1216 13:08:43.757586 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:44.449566 containerd[1969]: time="2025-12-16T13:08:44.449458232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:44.451704 containerd[1969]: time="2025-12-16T13:08:44.451568206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 13:08:44.454455 containerd[1969]: time="2025-12-16T13:08:44.454200662Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:44.458193 containerd[1969]: time="2025-12-16T13:08:44.458114485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:44.459281 containerd[1969]: time="2025-12-16T13:08:44.459030701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.747987438s" Dec 16 13:08:44.459281 containerd[1969]: time="2025-12-16T13:08:44.459093046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:08:44.515098 containerd[1969]: time="2025-12-16T13:08:44.515028888Z" level=info msg="CreateContainer within sandbox \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:08:44.605099 containerd[1969]: time="2025-12-16T13:08:44.603761853Z" level=info msg="Container f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:44.613336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181288821.mount: Deactivated successfully. Dec 16 13:08:44.647087 containerd[1969]: time="2025-12-16T13:08:44.645696262Z" level=info msg="CreateContainer within sandbox \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e\"" Dec 16 13:08:44.650827 containerd[1969]: time="2025-12-16T13:08:44.647980736Z" level=info msg="StartContainer for \"f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e\"" Dec 16 13:08:44.652482 containerd[1969]: time="2025-12-16T13:08:44.652159183Z" level=info msg="connecting to shim f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e" address="unix:///run/containerd/s/e9b62a8373bc5ada7a35b5809675abb6ffb16f65dfc92a9fb41ad7c7be53db9c" protocol=ttrpc version=3 Dec 16 13:08:44.706372 systemd[1]: Started cri-containerd-f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e.scope - libcontainer container f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e. 
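[annotation] The "Observed pod startup duration" record for calico-typha-6c55bc7669-f8p4b above reports several derived durations. As a rough cross-check, the numbers are consistent with podStartE2EDuration being the gap from pod creation to the watch-observed running time, and podStartSLOduration additionally excluding the image-pull window; that interpretation is my inference from the values, not something stated in the log. A minimal sketch using the timestamps from that record:

```python
# Timestamps copied from the calico-typha record above, expressed as seconds past 13:08:00.
created       = 33.000000000   # podCreationTimestamp  2025-12-16 13:08:33
first_pulling = 34.682905995   # firstStartedPulling
last_pulled   = 38.705845013   # lastFinishedPulling
watch_running = 40.481388478   # watchObservedRunningTime

e2e = watch_running - created               # ~7.481388478 s, matches podStartE2EDuration
slo = e2e - (last_pulled - first_pulling)   # ~3.45844946 s, matches podStartSLOduration up to float rounding
print(f"E2E={e2e:.9f}s SLO={slo:.9f}s")
```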
Dec 16 13:08:44.807531 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 16 13:08:44.807710 kernel: audit: type=1334 audit(1765890524.803:580): prog-id=176 op=LOAD Dec 16 13:08:44.803000 audit: BPF prog-id=176 op=LOAD Dec 16 13:08:44.803000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.820172 kernel: audit: type=1300 audit(1765890524.803:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.832878 kernel: audit: type=1327 audit(1765890524.803:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.803000 audit: BPF prog-id=177 op=LOAD Dec 16 13:08:44.836167 kernel: audit: type=1334 audit(1765890524.803:581): prog-id=177 op=LOAD Dec 16 13:08:44.803000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.849666 kernel: audit: type=1300 audit(1765890524.803:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.849806 kernel: audit: type=1327 audit(1765890524.803:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.851648 kernel: audit: type=1334 audit(1765890524.803:582): prog-id=177 op=UNLOAD Dec 16 13:08:44.803000 audit: BPF prog-id=177 op=UNLOAD Dec 16 13:08:44.856176 kernel: audit: type=1300 audit(1765890524.803:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.803000 
audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.860145 kernel: audit: type=1327 audit(1765890524.803:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.803000 audit: BPF prog-id=176 op=UNLOAD Dec 16 13:08:44.866013 kernel: audit: type=1334 audit(1765890524.803:583): prog-id=176 op=UNLOAD Dec 16 13:08:44.803000 audit[4143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.803000 audit: BPF prog-id=178 op=LOAD Dec 16 13:08:44.803000 audit[4143]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3983 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:44.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363565326261306361626233333631623531643539623338663062 Dec 16 13:08:44.875486 containerd[1969]: time="2025-12-16T13:08:44.875381835Z" level=info msg="StartContainer for \"f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e\" returns successfully" Dec 16 13:08:45.756037 kubelet[3309]: E1216 13:08:45.754541 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:46.357217 systemd[1]: cri-containerd-f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e.scope: Deactivated successfully. Dec 16 13:08:46.357657 systemd[1]: cri-containerd-f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e.scope: Consumed 739ms CPU time, 167.1M memory peak, 6.6M read from disk, 171.3M written to disk. 
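[annotation] The audit PROCTITLE values in these records are the process argv, hex-encoded with NUL bytes separating the arguments. A small sketch (a hypothetical helper, not part of any tool appearing in this log) that decodes them:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv with NUL separators."""
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# The runc records above decode to roughly:
#   runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id truncated by audit>
# and the iptables-restore record decodes to:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```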
Dec 16 13:08:46.363000 audit: BPF prog-id=178 op=UNLOAD Dec 16 13:08:46.422207 containerd[1969]: time="2025-12-16T13:08:46.421043813Z" level=info msg="received container exit event container_id:\"f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e\" id:\"f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e\" pid:4156 exited_at:{seconds:1765890526 nanos:408482699}" Dec 16 13:08:46.449786 kubelet[3309]: I1216 13:08:46.449748 3309 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:08:46.478707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f365e2ba0cabb3361b51d59b38f0bf7a08d6a15391bad5755aa84787be07735e-rootfs.mount: Deactivated successfully. Dec 16 13:08:46.529930 kubelet[3309]: I1216 13:08:46.529890 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8d6f\" (UniqueName: \"kubernetes.io/projected/4667e186-7669-4eee-8c92-538a1a091f5e-kube-api-access-f8d6f\") pod \"coredns-674b8bbfcf-xp9r7\" (UID: \"4667e186-7669-4eee-8c92-538a1a091f5e\") " pod="kube-system/coredns-674b8bbfcf-xp9r7" Dec 16 13:08:46.530206 kubelet[3309]: I1216 13:08:46.529979 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4667e186-7669-4eee-8c92-538a1a091f5e-config-volume\") pod \"coredns-674b8bbfcf-xp9r7\" (UID: \"4667e186-7669-4eee-8c92-538a1a091f5e\") " pod="kube-system/coredns-674b8bbfcf-xp9r7" Dec 16 13:08:46.538905 systemd[1]: Created slice kubepods-burstable-pod4667e186_7669_4eee_8c92_538a1a091f5e.slice - libcontainer container kubepods-burstable-pod4667e186_7669_4eee_8c92_538a1a091f5e.slice. Dec 16 13:08:46.569775 systemd[1]: Created slice kubepods-burstable-pod6fbf2e8b_b432_4b20_866b_c50e77db1d45.slice - libcontainer container kubepods-burstable-pod6fbf2e8b_b432_4b20_866b_c50e77db1d45.slice. Dec 16 13:08:46.611519 systemd[1]: Created slice kubepods-besteffort-pod17fc83ee_aaa8_428d_ba14_4fb4545cfe65.slice - libcontainer container kubepods-besteffort-pod17fc83ee_aaa8_428d_ba14_4fb4545cfe65.slice. Dec 16 13:08:46.622923 systemd[1]: Created slice kubepods-besteffort-podea48f51b_a248_4d71_8caa_ed889e7f5fac.slice - libcontainer container kubepods-besteffort-podea48f51b_a248_4d71_8caa_ed889e7f5fac.slice. 
Dec 16 13:08:46.630544 kubelet[3309]: I1216 13:08:46.630510 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea48f51b-a248-4d71-8caa-ed889e7f5fac-config\") pod \"goldmane-666569f655-wpbz6\" (UID: \"ea48f51b-a248-4d71-8caa-ed889e7f5fac\") " pod="calico-system/goldmane-666569f655-wpbz6" Dec 16 13:08:46.630544 kubelet[3309]: I1216 13:08:46.630552 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea48f51b-a248-4d71-8caa-ed889e7f5fac-goldmane-ca-bundle\") pod \"goldmane-666569f655-wpbz6\" (UID: \"ea48f51b-a248-4d71-8caa-ed889e7f5fac\") " pod="calico-system/goldmane-666569f655-wpbz6" Dec 16 13:08:46.630751 kubelet[3309]: I1216 13:08:46.630587 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eef40561-fc3a-47f4-ab5c-0482b5980a8d-tigera-ca-bundle\") pod \"calico-kube-controllers-7bcdd655bc-b4pqw\" (UID: \"eef40561-fc3a-47f4-ab5c-0482b5980a8d\") " pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" Dec 16 13:08:46.630751 kubelet[3309]: I1216 13:08:46.630615 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gntm\" (UniqueName: \"kubernetes.io/projected/6fbf2e8b-b432-4b20-866b-c50e77db1d45-kube-api-access-6gntm\") pod \"coredns-674b8bbfcf-hdxm2\" (UID: \"6fbf2e8b-b432-4b20-866b-c50e77db1d45\") " pod="kube-system/coredns-674b8bbfcf-hdxm2" Dec 16 13:08:46.630751 kubelet[3309]: I1216 13:08:46.630639 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldnt\" (UniqueName: \"kubernetes.io/projected/4af40c2b-1380-4c89-9267-7128376e55dc-kube-api-access-xldnt\") pod \"whisker-7794599587-24nsl\" (UID: \"4af40c2b-1380-4c89-9267-7128376e55dc\") " pod="calico-system/whisker-7794599587-24nsl" Dec 16 13:08:46.630751 kubelet[3309]: I1216 13:08:46.630664 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/402c8f91-f505-4b31-ab8d-437df33aba9f-calico-apiserver-certs\") pod \"calico-apiserver-6d7fb6ffdb-t947q\" (UID: \"402c8f91-f505-4b31-ab8d-437df33aba9f\") " pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" Dec 16 13:08:46.630751 kubelet[3309]: I1216 13:08:46.630685 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwxt\" (UniqueName: \"kubernetes.io/projected/402c8f91-f505-4b31-ab8d-437df33aba9f-kube-api-access-vqwxt\") pod \"calico-apiserver-6d7fb6ffdb-t947q\" (UID: \"402c8f91-f505-4b31-ab8d-437df33aba9f\") " pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" Dec 16 13:08:46.630978 kubelet[3309]: I1216 13:08:46.630708 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17fc83ee-aaa8-428d-ba14-4fb4545cfe65-calico-apiserver-certs\") pod \"calico-apiserver-6d7fb6ffdb-x9w4j\" (UID: \"17fc83ee-aaa8-428d-ba14-4fb4545cfe65\") " pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" Dec 16 13:08:46.630978 kubelet[3309]: I1216 13:08:46.630762 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qrs\" 
(UniqueName: \"kubernetes.io/projected/eef40561-fc3a-47f4-ab5c-0482b5980a8d-kube-api-access-q6qrs\") pod \"calico-kube-controllers-7bcdd655bc-b4pqw\" (UID: \"eef40561-fc3a-47f4-ab5c-0482b5980a8d\") " pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" Dec 16 13:08:46.630978 kubelet[3309]: I1216 13:08:46.630789 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ea48f51b-a248-4d71-8caa-ed889e7f5fac-goldmane-key-pair\") pod \"goldmane-666569f655-wpbz6\" (UID: \"ea48f51b-a248-4d71-8caa-ed889e7f5fac\") " pod="calico-system/goldmane-666569f655-wpbz6" Dec 16 13:08:46.630978 kubelet[3309]: I1216 13:08:46.630839 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-backend-key-pair\") pod \"whisker-7794599587-24nsl\" (UID: \"4af40c2b-1380-4c89-9267-7128376e55dc\") " pod="calico-system/whisker-7794599587-24nsl" Dec 16 13:08:46.630978 kubelet[3309]: I1216 13:08:46.630866 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wps7z\" (UniqueName: \"kubernetes.io/projected/ea48f51b-a248-4d71-8caa-ed889e7f5fac-kube-api-access-wps7z\") pod \"goldmane-666569f655-wpbz6\" (UID: \"ea48f51b-a248-4d71-8caa-ed889e7f5fac\") " pod="calico-system/goldmane-666569f655-wpbz6" Dec 16 13:08:46.633536 kubelet[3309]: I1216 13:08:46.630899 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-ca-bundle\") pod \"whisker-7794599587-24nsl\" (UID: \"4af40c2b-1380-4c89-9267-7128376e55dc\") " pod="calico-system/whisker-7794599587-24nsl" Dec 16 13:08:46.633536 kubelet[3309]: I1216 13:08:46.630931 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbf2e8b-b432-4b20-866b-c50e77db1d45-config-volume\") pod \"coredns-674b8bbfcf-hdxm2\" (UID: \"6fbf2e8b-b432-4b20-866b-c50e77db1d45\") " pod="kube-system/coredns-674b8bbfcf-hdxm2" Dec 16 13:08:46.633536 kubelet[3309]: I1216 13:08:46.630958 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h55z\" (UniqueName: \"kubernetes.io/projected/17fc83ee-aaa8-428d-ba14-4fb4545cfe65-kube-api-access-6h55z\") pod \"calico-apiserver-6d7fb6ffdb-x9w4j\" (UID: \"17fc83ee-aaa8-428d-ba14-4fb4545cfe65\") " pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" Dec 16 13:08:46.641942 systemd[1]: Created slice kubepods-besteffort-pod4af40c2b_1380_4c89_9267_7128376e55dc.slice - libcontainer container kubepods-besteffort-pod4af40c2b_1380_4c89_9267_7128376e55dc.slice. Dec 16 13:08:46.668340 systemd[1]: Created slice kubepods-besteffort-podeef40561_fc3a_47f4_ab5c_0482b5980a8d.slice - libcontainer container kubepods-besteffort-podeef40561_fc3a_47f4_ab5c_0482b5980a8d.slice. Dec 16 13:08:46.677615 systemd[1]: Created slice kubepods-besteffort-pod402c8f91_f505_4b31_ab8d_437df33aba9f.slice - libcontainer container kubepods-besteffort-pod402c8f91_f505_4b31_ab8d_437df33aba9f.slice. 
Dec 16 13:08:46.866438 containerd[1969]: time="2025-12-16T13:08:46.865445462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xp9r7,Uid:4667e186-7669-4eee-8c92-538a1a091f5e,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:46.903306 containerd[1969]: time="2025-12-16T13:08:46.901709210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdxm2,Uid:6fbf2e8b-b432-4b20-866b-c50e77db1d45,Namespace:kube-system,Attempt:0,}" Dec 16 13:08:46.924542 containerd[1969]: time="2025-12-16T13:08:46.924480446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-x9w4j,Uid:17fc83ee-aaa8-428d-ba14-4fb4545cfe65,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:46.940201 containerd[1969]: time="2025-12-16T13:08:46.940153570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wpbz6,Uid:ea48f51b-a248-4d71-8caa-ed889e7f5fac,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:46.957201 containerd[1969]: time="2025-12-16T13:08:46.955953807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7794599587-24nsl,Uid:4af40c2b-1380-4c89-9267-7128376e55dc,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:46.976104 containerd[1969]: time="2025-12-16T13:08:46.976025157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcdd655bc-b4pqw,Uid:eef40561-fc3a-47f4-ab5c-0482b5980a8d,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:46.983225 containerd[1969]: time="2025-12-16T13:08:46.981932753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-t947q,Uid:402c8f91-f505-4b31-ab8d-437df33aba9f,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:08:47.510656 containerd[1969]: time="2025-12-16T13:08:47.510343103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:08:47.761574 systemd[1]: Created slice kubepods-besteffort-podc808a4b9_6eee_4490_92c6_5f208009c5e7.slice - libcontainer container kubepods-besteffort-podc808a4b9_6eee_4490_92c6_5f208009c5e7.slice. Dec 16 13:08:47.765191 containerd[1969]: time="2025-12-16T13:08:47.765130767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h272q,Uid:c808a4b9-6eee-4490-92c6-5f208009c5e7,Namespace:calico-system,Attempt:0,}" Dec 16 13:08:49.224093 containerd[1969]: time="2025-12-16T13:08:49.222763663Z" level=error msg="Failed to destroy network for sandbox \"8e57d6c9d7fc05c2ed8026f37d07c6b807355eac9ceaf4e3a2ecc76be2d49167\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.226878 systemd[1]: run-netns-cni\x2dbb9e2e2d\x2d16b4\x2d928d\x2d2336\x2dbad309a7b1a9.mount: Deactivated successfully. Dec 16 13:08:49.230511 containerd[1969]: time="2025-12-16T13:08:49.230465274Z" level=error msg="Failed to destroy network for sandbox \"1adb49f6ac2e319e0f3c43490a7c4d7bfe531288dca60027cbb9f8ac9bf18367\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.237319 systemd[1]: run-netns-cni\x2dfdccec45\x2d82cc\x2dd84b\x2dcdf0\x2d2edd896ebd31.mount: Deactivated successfully. 
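[annotation] The run-netns and kubelet volume mount units in these entries use systemd's unit-name escaping: "/" becomes "-", and other special bytes (such as literal "-" or "~") become \xNN. A rough helper to recover the original paths, assumed to behave like `systemd-escape --unescape --path` for the unit names seen here:

```python
import re

def unescape_unit_path(unit: str) -> str:
    """Recover the filesystem path encoded in a systemd mount-unit name."""
    name = unit.rsplit(".", 1)[0]          # drop the ".mount" suffix
    path = name.replace("-", "/")          # "-" encodes "/"
    path = re.sub(r"\\x([0-9a-fA-F]{2})",  # \xNN encodes the original byte
                  lambda m: chr(int(m.group(1), 16)), path)
    return "/" + path

# -> /run/netns/cni-fdccec45-82cc-d84b-cdf0-2edd896ebd31
print(unescape_unit_path(r"run-netns-cni\x2dfdccec45\x2d82cc\x2dd84b\x2dcdf0\x2d2edd896ebd31.mount"))
```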
Dec 16 13:08:49.241515 containerd[1969]: time="2025-12-16T13:08:49.241387765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-x9w4j,Uid:17fc83ee-aaa8-428d-ba14-4fb4545cfe65,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e57d6c9d7fc05c2ed8026f37d07c6b807355eac9ceaf4e3a2ecc76be2d49167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.257763 containerd[1969]: time="2025-12-16T13:08:49.255638494Z" level=error msg="Failed to destroy network for sandbox \"57ef8d151565f094535c3965b3d49db0eafbb4c989cc2e515ff4c9c8e970dfb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.261444 containerd[1969]: time="2025-12-16T13:08:49.259997519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7794599587-24nsl,Uid:4af40c2b-1380-4c89-9267-7128376e55dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ef8d151565f094535c3965b3d49db0eafbb4c989cc2e515ff4c9c8e970dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.263237 systemd[1]: run-netns-cni\x2d65718e04\x2d71be\x2d389b\x2d00e7\x2d41d2b74df042.mount: Deactivated successfully. Dec 16 13:08:49.285094 containerd[1969]: time="2025-12-16T13:08:49.283335627Z" level=error msg="Failed to destroy network for sandbox \"4d52de6330598b8e38eaf5b0a556e08acd3c5b65e4a5c07b64ef5ca31ee11f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.288455 systemd[1]: run-netns-cni\x2d95b87ef6\x2dd9d7\x2da4f8\x2df613\x2d0fa4dfecedcb.mount: Deactivated successfully. 
Dec 16 13:08:49.291987 containerd[1969]: time="2025-12-16T13:08:49.291927286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wpbz6,Uid:ea48f51b-a248-4d71-8caa-ed889e7f5fac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adb49f6ac2e319e0f3c43490a7c4d7bfe531288dca60027cbb9f8ac9bf18367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.295572 containerd[1969]: time="2025-12-16T13:08:49.295508406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcdd655bc-b4pqw,Uid:eef40561-fc3a-47f4-ab5c-0482b5980a8d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d52de6330598b8e38eaf5b0a556e08acd3c5b65e4a5c07b64ef5ca31ee11f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.296936 kubelet[3309]: E1216 13:08:49.296548 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e57d6c9d7fc05c2ed8026f37d07c6b807355eac9ceaf4e3a2ecc76be2d49167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.296936 kubelet[3309]: E1216 13:08:49.296651 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e57d6c9d7fc05c2ed8026f37d07c6b807355eac9ceaf4e3a2ecc76be2d49167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" Dec 16 13:08:49.296936 kubelet[3309]: E1216 13:08:49.296685 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e57d6c9d7fc05c2ed8026f37d07c6b807355eac9ceaf4e3a2ecc76be2d49167\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" Dec 16 13:08:49.296936 kubelet[3309]: E1216 13:08:49.296810 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d52de6330598b8e38eaf5b0a556e08acd3c5b65e4a5c07b64ef5ca31ee11f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.298383 kubelet[3309]: E1216 13:08:49.296855 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d52de6330598b8e38eaf5b0a556e08acd3c5b65e4a5c07b64ef5ca31ee11f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" Dec 16 13:08:49.298383 kubelet[3309]: E1216 13:08:49.296878 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d52de6330598b8e38eaf5b0a556e08acd3c5b65e4a5c07b64ef5ca31ee11f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" Dec 16 13:08:49.299279 kubelet[3309]: E1216 13:08:49.299227 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bcdd655bc-b4pqw_calico-system(eef40561-fc3a-47f4-ab5c-0482b5980a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bcdd655bc-b4pqw_calico-system(eef40561-fc3a-47f4-ab5c-0482b5980a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d52de6330598b8e38eaf5b0a556e08acd3c5b65e4a5c07b64ef5ca31ee11f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:08:49.303084 kubelet[3309]: E1216 13:08:49.301028 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d7fb6ffdb-x9w4j_calico-apiserver(17fc83ee-aaa8-428d-ba14-4fb4545cfe65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d7fb6ffdb-x9w4j_calico-apiserver(17fc83ee-aaa8-428d-ba14-4fb4545cfe65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e57d6c9d7fc05c2ed8026f37d07c6b807355eac9ceaf4e3a2ecc76be2d49167\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:08:49.303084 kubelet[3309]: E1216 13:08:49.301213 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adb49f6ac2e319e0f3c43490a7c4d7bfe531288dca60027cbb9f8ac9bf18367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.303084 kubelet[3309]: E1216 13:08:49.301265 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adb49f6ac2e319e0f3c43490a7c4d7bfe531288dca60027cbb9f8ac9bf18367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wpbz6" Dec 16 13:08:49.303388 kubelet[3309]: E1216 13:08:49.301290 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adb49f6ac2e319e0f3c43490a7c4d7bfe531288dca60027cbb9f8ac9bf18367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wpbz6" Dec 16 13:08:49.303388 kubelet[3309]: E1216 13:08:49.301338 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wpbz6_calico-system(ea48f51b-a248-4d71-8caa-ed889e7f5fac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wpbz6_calico-system(ea48f51b-a248-4d71-8caa-ed889e7f5fac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1adb49f6ac2e319e0f3c43490a7c4d7bfe531288dca60027cbb9f8ac9bf18367\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:08:49.303388 kubelet[3309]: E1216 13:08:49.301376 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ef8d151565f094535c3965b3d49db0eafbb4c989cc2e515ff4c9c8e970dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.303793 kubelet[3309]: E1216 13:08:49.301400 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ef8d151565f094535c3965b3d49db0eafbb4c989cc2e515ff4c9c8e970dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7794599587-24nsl" Dec 16 13:08:49.303793 kubelet[3309]: E1216 13:08:49.303663 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57ef8d151565f094535c3965b3d49db0eafbb4c989cc2e515ff4c9c8e970dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7794599587-24nsl" Dec 16 13:08:49.303793 kubelet[3309]: E1216 13:08:49.303733 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7794599587-24nsl_calico-system(4af40c2b-1380-4c89-9267-7128376e55dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7794599587-24nsl_calico-system(4af40c2b-1380-4c89-9267-7128376e55dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57ef8d151565f094535c3965b3d49db0eafbb4c989cc2e515ff4c9c8e970dfb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7794599587-24nsl" podUID="4af40c2b-1380-4c89-9267-7128376e55dc" Dec 16 13:08:49.304288 containerd[1969]: time="2025-12-16T13:08:49.304243174Z" level=error msg="Failed to destroy network for sandbox \"a99096392224ea6761bfba1126f1d76ddd6cffbac96a1a3b032b779aaecb37ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 16 13:08:49.311455 systemd[1]: run-netns-cni\x2dabb30b91\x2d8a18\x2dcb67\x2d6ce7\x2d432d8102dfaa.mount: Deactivated successfully. Dec 16 13:08:49.317049 containerd[1969]: time="2025-12-16T13:08:49.316991257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-t947q,Uid:402c8f91-f505-4b31-ab8d-437df33aba9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99096392224ea6761bfba1126f1d76ddd6cffbac96a1a3b032b779aaecb37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.318366 containerd[1969]: time="2025-12-16T13:08:49.317897736Z" level=error msg="Failed to destroy network for sandbox \"2eaba1d88014692167bffb95873220041ca49377f3cdbbf6aaf378d031094598\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.318495 kubelet[3309]: E1216 13:08:49.318196 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99096392224ea6761bfba1126f1d76ddd6cffbac96a1a3b032b779aaecb37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.318495 kubelet[3309]: E1216 13:08:49.318267 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99096392224ea6761bfba1126f1d76ddd6cffbac96a1a3b032b779aaecb37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" Dec 16 13:08:49.318495 kubelet[3309]: E1216 13:08:49.318308 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99096392224ea6761bfba1126f1d76ddd6cffbac96a1a3b032b779aaecb37ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" Dec 16 13:08:49.318980 kubelet[3309]: E1216 13:08:49.318904 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d7fb6ffdb-t947q_calico-apiserver(402c8f91-f505-4b31-ab8d-437df33aba9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d7fb6ffdb-t947q_calico-apiserver(402c8f91-f505-4b31-ab8d-437df33aba9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a99096392224ea6761bfba1126f1d76ddd6cffbac96a1a3b032b779aaecb37ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:08:49.331290 containerd[1969]: time="2025-12-16T13:08:49.330484049Z" level=error msg="Failed to destroy network for 
sandbox \"b9518d8a959f53c6c0d3ed041bff7cbfd41eee34282c6f447d6a552bb83a0655\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.331290 containerd[1969]: time="2025-12-16T13:08:49.330744503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xp9r7,Uid:4667e186-7669-4eee-8c92-538a1a091f5e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaba1d88014692167bffb95873220041ca49377f3cdbbf6aaf378d031094598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.332473 kubelet[3309]: E1216 13:08:49.332229 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaba1d88014692167bffb95873220041ca49377f3cdbbf6aaf378d031094598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.332473 kubelet[3309]: E1216 13:08:49.332307 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaba1d88014692167bffb95873220041ca49377f3cdbbf6aaf378d031094598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xp9r7" Dec 16 13:08:49.332473 kubelet[3309]: E1216 13:08:49.332339 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaba1d88014692167bffb95873220041ca49377f3cdbbf6aaf378d031094598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xp9r7" Dec 16 13:08:49.332705 kubelet[3309]: E1216 13:08:49.332408 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xp9r7_kube-system(4667e186-7669-4eee-8c92-538a1a091f5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xp9r7_kube-system(4667e186-7669-4eee-8c92-538a1a091f5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eaba1d88014692167bffb95873220041ca49377f3cdbbf6aaf378d031094598\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xp9r7" podUID="4667e186-7669-4eee-8c92-538a1a091f5e" Dec 16 13:08:49.333082 containerd[1969]: time="2025-12-16T13:08:49.333012835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h272q,Uid:c808a4b9-6eee-4490-92c6-5f208009c5e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9518d8a959f53c6c0d3ed041bff7cbfd41eee34282c6f447d6a552bb83a0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.361129 containerd[1969]: time="2025-12-16T13:08:49.336997124Z" level=error msg="Failed to destroy network for sandbox \"ece324567884417bbbb9cf7bc04b90547d2c019c5c0ffb2cf0d92513d1ff6f60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.361129 containerd[1969]: time="2025-12-16T13:08:49.340030119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdxm2,Uid:6fbf2e8b-b432-4b20-866b-c50e77db1d45,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece324567884417bbbb9cf7bc04b90547d2c019c5c0ffb2cf0d92513d1ff6f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.361375 kubelet[3309]: E1216 13:08:49.333263 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9518d8a959f53c6c0d3ed041bff7cbfd41eee34282c6f447d6a552bb83a0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.361375 kubelet[3309]: E1216 13:08:49.333317 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9518d8a959f53c6c0d3ed041bff7cbfd41eee34282c6f447d6a552bb83a0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:49.361375 kubelet[3309]: E1216 13:08:49.333354 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9518d8a959f53c6c0d3ed041bff7cbfd41eee34282c6f447d6a552bb83a0655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h272q" Dec 16 13:08:49.361545 kubelet[3309]: E1216 13:08:49.333446 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9518d8a959f53c6c0d3ed041bff7cbfd41eee34282c6f447d6a552bb83a0655\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:08:49.361545 kubelet[3309]: E1216 13:08:49.340425 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece324567884417bbbb9cf7bc04b90547d2c019c5c0ffb2cf0d92513d1ff6f60\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:08:49.361545 kubelet[3309]: E1216 13:08:49.340493 3309 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece324567884417bbbb9cf7bc04b90547d2c019c5c0ffb2cf0d92513d1ff6f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hdxm2" Dec 16 13:08:49.361720 kubelet[3309]: E1216 13:08:49.340519 3309 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece324567884417bbbb9cf7bc04b90547d2c019c5c0ffb2cf0d92513d1ff6f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hdxm2" Dec 16 13:08:49.361720 kubelet[3309]: E1216 13:08:49.340616 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hdxm2_kube-system(6fbf2e8b-b432-4b20-866b-c50e77db1d45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hdxm2_kube-system(6fbf2e8b-b432-4b20-866b-c50e77db1d45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ece324567884417bbbb9cf7bc04b90547d2c019c5c0ffb2cf0d92513d1ff6f60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hdxm2" podUID="6fbf2e8b-b432-4b20-866b-c50e77db1d45" Dec 16 13:08:50.225565 systemd[1]: run-netns-cni\x2d5aeed8b2\x2d4b88\x2dc3ba\x2d68ad\x2d5fbef85e424c.mount: Deactivated successfully. Dec 16 13:08:50.225677 systemd[1]: run-netns-cni\x2d27416093\x2dbced\x2d5cfe\x2d9af4\x2d9c75452084f1.mount: Deactivated successfully. Dec 16 13:08:50.225735 systemd[1]: run-netns-cni\x2d7dbc2f26\x2d79de\x2d43c5\x2da5df\x2d9a6f64cebd0d.mount: Deactivated successfully. Dec 16 13:08:56.254495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303528671.mount: Deactivated successfully. 
Dec 16 13:08:56.380140 containerd[1969]: time="2025-12-16T13:08:56.380053863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:56.390778 containerd[1969]: time="2025-12-16T13:08:56.390662213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 13:08:56.423111 containerd[1969]: time="2025-12-16T13:08:56.422970723Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:56.427175 containerd[1969]: time="2025-12-16T13:08:56.427112122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:08:56.428213 containerd[1969]: time="2025-12-16T13:08:56.428098387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.917699652s" Dec 16 13:08:56.442207 containerd[1969]: time="2025-12-16T13:08:56.442143222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:08:56.524134 containerd[1969]: time="2025-12-16T13:08:56.523032225Z" level=info msg="CreateContainer within sandbox \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:08:56.624330 containerd[1969]: time="2025-12-16T13:08:56.624274504Z" level=info msg="Container 4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:08:56.627780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1561453291.mount: Deactivated successfully. Dec 16 13:08:56.701441 containerd[1969]: time="2025-12-16T13:08:56.701388438Z" level=info msg="CreateContainer within sandbox \"1a0f77a87b61b208a1296788b16b84f9a09f2d5ab12836c8b67e0413a1d5c188\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33\"" Dec 16 13:08:56.704781 containerd[1969]: time="2025-12-16T13:08:56.704690438Z" level=info msg="StartContainer for \"4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33\"" Dec 16 13:08:56.710489 containerd[1969]: time="2025-12-16T13:08:56.709817912Z" level=info msg="connecting to shim 4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33" address="unix:///run/containerd/s/e9b62a8373bc5ada7a35b5809675abb6ffb16f65dfc92a9fb41ad7c7be53db9c" protocol=ttrpc version=3 Dec 16 13:08:56.892304 systemd[1]: Started cri-containerd-4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33.scope - libcontainer container 4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33. 
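[annotation] For the calico/node pull logged just above, containerd reports both the bytes read and the wall-clock pull time, so the effective transfer rate can be computed directly; this is a back-of-the-envelope check, not a figure stated anywhere in the log:

```python
bytes_read   = 156_880_025   # "stop pulling image ...: active requests=0, bytes read=156880025"
pull_seconds = 8.917699652   # "... in 8.917699652s"

rate_mib_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"~{rate_mib_s:.1f} MiB/s")  # roughly 16.8 MiB/s for this pull
```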
Dec 16 13:08:56.972000 audit: BPF prog-id=179 op=LOAD Dec 16 13:08:57.009835 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:08:57.009954 kernel: audit: type=1334 audit(1765890536.972:586): prog-id=179 op=LOAD Dec 16 13:08:57.009997 kernel: audit: type=1300 audit(1765890536.972:586): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000aa488 a2=98 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:57.010031 kernel: audit: type=1327 audit(1765890536.972:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:57.010080 kernel: audit: type=1334 audit(1765890536.973:587): prog-id=180 op=LOAD Dec 16 13:08:57.010114 kernel: audit: type=1300 audit(1765890536.973:587): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000aa218 a2=98 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:57.010165 kernel: audit: type=1327 audit(1765890536.973:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:57.010198 kernel: audit: type=1334 audit(1765890536.973:588): prog-id=180 op=UNLOAD Dec 16 13:08:57.010230 kernel: audit: type=1300 audit(1765890536.973:588): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:57.010262 kernel: audit: type=1327 audit(1765890536.973:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:56.972000 audit[4413]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000aa488 a2=98 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:56.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:56.973000 audit: BPF prog-id=180 op=LOAD Dec 16 13:08:56.973000 audit[4413]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000aa218 a2=98 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:56.973000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:56.973000 audit: BPF prog-id=180 op=UNLOAD Dec 16 13:08:56.973000 audit[4413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:56.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:56.973000 audit: BPF prog-id=179 op=UNLOAD Dec 16 13:08:57.017046 kernel: audit: type=1334 audit(1765890536.973:589): prog-id=179 op=UNLOAD Dec 16 13:08:56.973000 audit[4413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:56.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:56.973000 audit: BPF prog-id=181 op=LOAD Dec 16 13:08:56.973000 audit[4413]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000aa6e8 a2=98 a3=0 items=0 ppid=3983 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:56.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393961356337366136336436336266303063393064363437333435 Dec 16 13:08:57.085312 containerd[1969]: time="2025-12-16T13:08:57.085272564Z" level=info msg="StartContainer for \"4c99a5c76a63d63bf00c90d647345e4593c205322abf5e89ebd3e0ec164a6b33\" returns successfully" Dec 16 13:08:57.869332 kubelet[3309]: I1216 13:08:57.866542 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-csw6z" podStartSLOduration=3.00667052 podStartE2EDuration="24.866510981s" podCreationTimestamp="2025-12-16 13:08:33 +0000 UTC" firstStartedPulling="2025-12-16 13:08:34.584668966 +0000 UTC m=+54.191785007" lastFinishedPulling="2025-12-16 13:08:56.444509421 +0000 UTC m=+76.051625468" observedRunningTime="2025-12-16 13:08:57.752670181 +0000 UTC m=+77.359786238" watchObservedRunningTime="2025-12-16 13:08:57.866510981 +0000 UTC m=+77.473627036" Dec 16 13:08:58.043014 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:08:58.043191 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 13:08:58.468270 kubelet[3309]: I1216 13:08:58.468219 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xldnt\" (UniqueName: \"kubernetes.io/projected/4af40c2b-1380-4c89-9267-7128376e55dc-kube-api-access-xldnt\") pod \"4af40c2b-1380-4c89-9267-7128376e55dc\" (UID: \"4af40c2b-1380-4c89-9267-7128376e55dc\") " Dec 16 13:08:58.468475 kubelet[3309]: I1216 13:08:58.468302 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-ca-bundle\") pod \"4af40c2b-1380-4c89-9267-7128376e55dc\" (UID: \"4af40c2b-1380-4c89-9267-7128376e55dc\") " Dec 16 13:08:58.468475 kubelet[3309]: I1216 13:08:58.468335 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-backend-key-pair\") pod \"4af40c2b-1380-4c89-9267-7128376e55dc\" (UID: \"4af40c2b-1380-4c89-9267-7128376e55dc\") " Dec 16 13:08:58.484792 kubelet[3309]: I1216 13:08:58.484735 3309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4af40c2b-1380-4c89-9267-7128376e55dc" (UID: "4af40c2b-1380-4c89-9267-7128376e55dc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:08:58.489368 kubelet[3309]: I1216 13:08:58.489259 3309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af40c2b-1380-4c89-9267-7128376e55dc-kube-api-access-xldnt" (OuterVolumeSpecName: "kube-api-access-xldnt") pod "4af40c2b-1380-4c89-9267-7128376e55dc" (UID: "4af40c2b-1380-4c89-9267-7128376e55dc"). InnerVolumeSpecName "kube-api-access-xldnt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:08:58.495514 kubelet[3309]: I1216 13:08:58.495406 3309 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4af40c2b-1380-4c89-9267-7128376e55dc" (UID: "4af40c2b-1380-4c89-9267-7128376e55dc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:08:58.496600 systemd[1]: var-lib-kubelet-pods-4af40c2b\x2d1380\x2d4c89\x2d9267\x2d7128376e55dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxldnt.mount: Deactivated successfully. Dec 16 13:08:58.507412 systemd[1]: var-lib-kubelet-pods-4af40c2b\x2d1380\x2d4c89\x2d9267\x2d7128376e55dc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
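The two .mount units that systemd deactivated above encode the kubelet volume paths with systemd's unit-name escaping: '/' becomes '-', and other bytes are escaped as \xNN (so a literal '-' appears as \x2d and '~' as \x7e). A minimal Python sketch of the reverse mapping, applied to the kube-api-access unit; unescape_unit_path() is an illustrative helper of ours, not a systemd API (the systemd-escape tool provides this conversion natively):

    import re

    def unescape_unit_path(unit: str) -> str:
        # Drop the unit suffix, split on '-' (the escaped '/'), undo \xNN escapes.
        name = unit.removesuffix(".mount")
        unhex = lambda m: chr(int(m.group(1), 16))
        parts = [re.sub(r"\\x([0-9a-fA-F]{2})", unhex, p) for p in name.split("-")]
        return "/" + "/".join(parts)

    print(unescape_unit_path(
        r"var-lib-kubelet-pods-4af40c2b\x2d1380\x2d4c89\x2d9267\x2d7128376e55dc"
        r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxldnt.mount"))
    # -> /var/lib/kubelet/pods/4af40c2b-1380-4c89-9267-7128376e55dc/volumes/kubernetes.io~projected/kube-api-access-xldnt

The decoded path matches the orphaned-volumes directory the kubelet reports cleaning up a few entries below.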
Dec 16 13:08:58.569407 kubelet[3309]: I1216 13:08:58.569203 3309 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-backend-key-pair\") on node \"ip-172-31-28-98\" DevicePath \"\"" Dec 16 13:08:58.569866 kubelet[3309]: I1216 13:08:58.569565 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xldnt\" (UniqueName: \"kubernetes.io/projected/4af40c2b-1380-4c89-9267-7128376e55dc-kube-api-access-xldnt\") on node \"ip-172-31-28-98\" DevicePath \"\"" Dec 16 13:08:58.569866 kubelet[3309]: I1216 13:08:58.569742 3309 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af40c2b-1380-4c89-9267-7128376e55dc-whisker-ca-bundle\") on node \"ip-172-31-28-98\" DevicePath \"\"" Dec 16 13:08:58.654470 systemd[1]: Removed slice kubepods-besteffort-pod4af40c2b_1380_4c89_9267_7128376e55dc.slice - libcontainer container kubepods-besteffort-pod4af40c2b_1380_4c89_9267_7128376e55dc.slice. Dec 16 13:08:58.779546 kubelet[3309]: I1216 13:08:58.775562 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af40c2b-1380-4c89-9267-7128376e55dc" path="/var/lib/kubelet/pods/4af40c2b-1380-4c89-9267-7128376e55dc/volumes" Dec 16 13:08:59.023243 systemd[1]: Created slice kubepods-besteffort-podf4a8c05f_aa26_454c_a381_75bd59548a78.slice - libcontainer container kubepods-besteffort-podf4a8c05f_aa26_454c_a381_75bd59548a78.slice. Dec 16 13:08:59.075154 kubelet[3309]: I1216 13:08:59.075043 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44l9\" (UniqueName: \"kubernetes.io/projected/f4a8c05f-aa26-454c-a381-75bd59548a78-kube-api-access-v44l9\") pod \"whisker-58f99f576c-h7p64\" (UID: \"f4a8c05f-aa26-454c-a381-75bd59548a78\") " pod="calico-system/whisker-58f99f576c-h7p64" Dec 16 13:08:59.076692 kubelet[3309]: I1216 13:08:59.076655 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a8c05f-aa26-454c-a381-75bd59548a78-whisker-backend-key-pair\") pod \"whisker-58f99f576c-h7p64\" (UID: \"f4a8c05f-aa26-454c-a381-75bd59548a78\") " pod="calico-system/whisker-58f99f576c-h7p64" Dec 16 13:08:59.076824 kubelet[3309]: I1216 13:08:59.076749 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a8c05f-aa26-454c-a381-75bd59548a78-whisker-ca-bundle\") pod \"whisker-58f99f576c-h7p64\" (UID: \"f4a8c05f-aa26-454c-a381-75bd59548a78\") " pod="calico-system/whisker-58f99f576c-h7p64" Dec 16 13:08:59.357022 containerd[1969]: time="2025-12-16T13:08:59.356884531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f99f576c-h7p64,Uid:f4a8c05f-aa26-454c-a381-75bd59548a78,Namespace:calico-system,Attempt:0,}" Dec 16 13:09:00.775703 containerd[1969]: time="2025-12-16T13:09:00.775528606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-x9w4j,Uid:17fc83ee-aaa8-428d-ba14-4fb4545cfe65,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:09:01.104000 audit: BPF prog-id=182 op=LOAD Dec 16 13:09:01.104000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe26d21c0 a2=98 a3=1fffffffffffffff items=0 ppid=4567 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.104000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:09:01.105000 audit: BPF prog-id=182 op=UNLOAD Dec 16 13:09:01.105000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe26d2190 a3=0 items=0 ppid=4567 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.105000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:09:01.107000 audit: BPF prog-id=183 op=LOAD Dec 16 13:09:01.107000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe26d20a0 a2=94 a3=3 items=0 ppid=4567 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.107000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:09:01.107000 audit: BPF prog-id=183 op=UNLOAD Dec 16 13:09:01.107000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe26d20a0 a2=94 a3=3 items=0 ppid=4567 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.107000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:09:01.107000 audit: BPF prog-id=184 op=LOAD Dec 16 13:09:01.107000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe26d20e0 a2=94 a3=7fffe26d22c0 items=0 ppid=4567 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.107000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:09:01.107000 audit: BPF prog-id=184 op=UNLOAD Dec 16 13:09:01.107000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe26d20e0 a2=94 a3=7fffe26d22c0 items=0 ppid=4567 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:09:01.107000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:09:01.113000 audit: BPF prog-id=185 op=LOAD Dec 16 13:09:01.113000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe4d1dbbc0 a2=98 a3=3 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.113000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.113000 audit: BPF prog-id=185 op=UNLOAD Dec 16 13:09:01.113000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe4d1dbb90 a3=0 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.113000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.122000 audit: BPF prog-id=186 op=LOAD Dec 16 13:09:01.122000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4d1db9b0 a2=94 a3=54428f items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.122000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.122000 audit: BPF prog-id=186 op=UNLOAD Dec 16 13:09:01.122000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe4d1db9b0 a2=94 a3=54428f items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.122000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.122000 audit: BPF prog-id=187 op=LOAD Dec 16 13:09:01.122000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4d1db9e0 a2=94 a3=2 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.122000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.122000 audit: BPF prog-id=187 op=UNLOAD Dec 16 13:09:01.122000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe4d1db9e0 a2=0 a3=2 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.122000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.755760 containerd[1969]: time="2025-12-16T13:09:01.755708379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bcdd655bc-b4pqw,Uid:eef40561-fc3a-47f4-ab5c-0482b5980a8d,Namespace:calico-system,Attempt:0,}" Dec 16 13:09:01.755760 containerd[1969]: time="2025-12-16T13:09:01.755708686Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wpbz6,Uid:ea48f51b-a248-4d71-8caa-ed889e7f5fac,Namespace:calico-system,Attempt:0,}" Dec 16 13:09:01.817000 audit: BPF prog-id=188 op=LOAD Dec 16 13:09:01.817000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe4d1db8a0 a2=94 a3=1 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.817000 audit: BPF prog-id=188 op=UNLOAD Dec 16 13:09:01.817000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe4d1db8a0 a2=94 a3=1 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.850000 audit: BPF prog-id=189 op=LOAD Dec 16 13:09:01.850000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe4d1db890 a2=94 a3=4 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.850000 audit: BPF prog-id=189 op=UNLOAD Dec 16 13:09:01.850000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe4d1db890 a2=0 a3=4 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.851000 audit: BPF prog-id=190 op=LOAD Dec 16 13:09:01.851000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe4d1db6f0 a2=94 a3=5 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.851000 audit: BPF prog-id=190 op=UNLOAD Dec 16 13:09:01.851000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe4d1db6f0 a2=0 a3=5 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.852000 audit: BPF prog-id=191 op=LOAD Dec 16 13:09:01.852000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe4d1db910 a2=94 a3=6 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.854000 audit: BPF 
prog-id=191 op=UNLOAD Dec 16 13:09:01.854000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe4d1db910 a2=0 a3=6 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.854000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.858000 audit: BPF prog-id=192 op=LOAD Dec 16 13:09:01.858000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe4d1db0c0 a2=94 a3=88 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.858000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.860000 audit: BPF prog-id=193 op=LOAD Dec 16 13:09:01.860000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe4d1daf40 a2=94 a3=2 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.860000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.860000 audit: BPF prog-id=193 op=UNLOAD Dec 16 13:09:01.860000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe4d1daf70 a2=0 a3=7ffe4d1db070 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.860000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:01.860000 audit: BPF prog-id=192 op=UNLOAD Dec 16 13:09:01.860000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3835cd10 a2=0 a3=ffbef01cba8f2188 items=0 ppid=4567 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:01.860000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:09:02.125179 kernel: kauditd_printk_skb: 77 callbacks suppressed Dec 16 13:09:02.125362 kernel: audit: type=1334 audit(1765890542.115:615): prog-id=194 op=LOAD Dec 16 13:09:02.115000 audit: BPF prog-id=194 op=LOAD Dec 16 13:09:02.139167 kernel: audit: type=1300 audit(1765890542.115:615): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1c93c460 a2=98 a3=1999999999999999 items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.115000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1c93c460 a2=98 a3=1999999999999999 items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.178796 kernel: audit: type=1327 audit(1765890542.115:615): 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.178924 kernel: audit: type=1334 audit(1765890542.116:616): prog-id=194 op=UNLOAD Dec 16 13:09:02.115000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.116000 audit: BPF prog-id=194 op=UNLOAD Dec 16 13:09:02.173622 (udev-worker)[4682]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:09:02.185462 kernel: audit: type=1300 audit(1765890542.116:616): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd1c93c430 a3=0 items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.116000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd1c93c430 a3=0 items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.198539 kernel: audit: type=1327 audit(1765890542.116:616): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.198666 kernel: audit: type=1334 audit(1765890542.116:617): prog-id=195 op=LOAD Dec 16 13:09:02.116000 audit: BPF prog-id=195 op=LOAD Dec 16 13:09:02.197864 systemd-networkd[1567]: califa3b1a61f1e: Link UP Dec 16 13:09:02.208787 kernel: audit: type=1300 audit(1765890542.116:617): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1c93c340 a2=94 a3=ffff items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.208863 kernel: audit: type=1327 audit(1765890542.116:617): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.116000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1c93c340 a2=94 a3=ffff items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.116000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.116000 audit: BPF prog-id=195 op=UNLOAD Dec 16 13:09:02.214399 kernel: audit: type=1334 audit(1765890542.116:618): prog-id=195 op=UNLOAD Dec 16 13:09:02.116000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd1c93c340 a2=94 a3=ffff items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.116000 audit: BPF prog-id=196 op=LOAD Dec 16 13:09:02.116000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1c93c380 a2=94 a3=7ffd1c93c560 items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.116000 audit: BPF prog-id=196 op=UNLOAD Dec 16 13:09:02.116000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd1c93c380 a2=94 a3=7ffd1c93c560 items=0 ppid=4567 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:09:02.221970 systemd-networkd[1567]: califa3b1a61f1e: Gained carrier Dec 16 13:09:02.338998 containerd[1969]: 2025-12-16 13:08:59.403 [INFO][4532] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:09:02.338998 containerd[1969]: 2025-12-16 13:08:59.802 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0 whisker-58f99f576c- calico-system f4a8c05f-aa26-454c-a381-75bd59548a78 990 0 2025-12-16 13:08:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58f99f576c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-98 whisker-58f99f576c-h7p64 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califa3b1a61f1e [] [] }} ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" 
WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-" Dec 16 13:09:02.338998 containerd[1969]: 2025-12-16 13:08:59.802 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.338998 containerd[1969]: 2025-12-16 13:09:01.682 [INFO][4541] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" HandleID="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Workload="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.684 [INFO][4541] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" HandleID="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Workload="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003923f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-98", "pod":"whisker-58f99f576c-h7p64", "timestamp":"2025-12-16 13:09:01.682456076 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.684 [INFO][4541] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.687 [INFO][4541] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.694 [INFO][4541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.730 [INFO][4541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" host="ip-172-31-28-98" Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.822 [INFO][4541] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.850 [INFO][4541] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.859 [INFO][4541] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.865 [INFO][4541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:02.339982 containerd[1969]: 2025-12-16 13:09:01.866 [INFO][4541] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" host="ip-172-31-28-98" Dec 16 13:09:02.342167 containerd[1969]: 2025-12-16 13:09:01.873 [INFO][4541] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942 Dec 16 13:09:02.342167 containerd[1969]: 2025-12-16 13:09:01.942 [INFO][4541] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" host="ip-172-31-28-98" Dec 16 13:09:02.342167 containerd[1969]: 2025-12-16 13:09:02.044 [INFO][4541] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.1/26] block=192.168.44.0/26 handle="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" host="ip-172-31-28-98" Dec 16 13:09:02.342167 containerd[1969]: 2025-12-16 13:09:02.045 [INFO][4541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.1/26] handle="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" host="ip-172-31-28-98" Dec 16 13:09:02.342167 containerd[1969]: 2025-12-16 13:09:02.045 [INFO][4541] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:09:02.342167 containerd[1969]: 2025-12-16 13:09:02.045 [INFO][4541] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.1/26] IPv6=[] ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" HandleID="k8s-pod-network.f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Workload="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.342389 containerd[1969]: 2025-12-16 13:09:02.068 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0", GenerateName:"whisker-58f99f576c-", Namespace:"calico-system", SelfLink:"", UID:"f4a8c05f-aa26-454c-a381-75bd59548a78", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58f99f576c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"whisker-58f99f576c-h7p64", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa3b1a61f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:02.342389 containerd[1969]: 2025-12-16 13:09:02.071 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.1/32] ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.342544 containerd[1969]: 2025-12-16 13:09:02.071 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa3b1a61f1e ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.342544 containerd[1969]: 2025-12-16 13:09:02.220 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.342623 containerd[1969]: 2025-12-16 13:09:02.228 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" 
WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0", GenerateName:"whisker-58f99f576c-", Namespace:"calico-system", SelfLink:"", UID:"f4a8c05f-aa26-454c-a381-75bd59548a78", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58f99f576c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942", Pod:"whisker-58f99f576c-h7p64", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa3b1a61f1e", MAC:"06:73:cc:f4:e4:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:02.342711 containerd[1969]: 2025-12-16 13:09:02.299 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" Namespace="calico-system" Pod="whisker-58f99f576c-h7p64" WorkloadEndpoint="ip--172--31--28--98-k8s-whisker--58f99f576c--h7p64-eth0" Dec 16 13:09:02.604618 (udev-worker)[4681]: Network interface NamePolicy= disabled on kernel command line. 
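For context on the IPAM lines above: the node ip-172-31-28-98 holds a block affinity for 192.168.44.0/26, and the whisker pod was assigned 192.168.44.1 out of that block (the calico-apiserver pod further down receives 192.168.44.2 from the same block). A small Python sketch of the block arithmetic, for illustration only:

    import ipaddress

    block  = ipaddress.ip_network("192.168.44.0/26")   # node-affine IPAM block from the log
    pod_ip = ipaddress.ip_address("192.168.44.1")      # assigned to whisker-58f99f576c-h7p64

    print(block.num_addresses)                              # 64 addresses per /26 block
    print(pod_ip in block)                                  # True
    print(block.network_address, block.broadcast_address)   # 192.168.44.0 192.168.44.63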
Dec 16 13:09:02.680910 systemd-networkd[1567]: vxlan.calico: Link UP Dec 16 13:09:02.680922 systemd-networkd[1567]: vxlan.calico: Gained carrier Dec 16 13:09:02.801650 containerd[1969]: time="2025-12-16T13:09:02.799286809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-t947q,Uid:402c8f91-f505-4b31-ab8d-437df33aba9f,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:09:02.999000 audit: BPF prog-id=197 op=LOAD Dec 16 13:09:02.999000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeee908870 a2=98 a3=0 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.999000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:02.999000 audit: BPF prog-id=197 op=UNLOAD Dec 16 13:09:02.999000 audit[4746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeee908840 a3=0 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:02.999000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=198 op=LOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeee908680 a2=94 a3=54428f items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=198 op=UNLOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeee908680 a2=94 a3=54428f items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=199 op=LOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeee9086b0 a2=94 a3=2 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=199 op=UNLOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeee9086b0 a2=0 a3=2 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=200 op=LOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeee908460 a2=94 a3=4 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=200 op=UNLOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeee908460 a2=94 a3=4 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=201 op=LOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeee908560 a2=94 a3=7ffeee9086e0 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.028000 audit: BPF prog-id=201 op=UNLOAD Dec 16 13:09:03.028000 audit[4746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeee908560 a2=0 a3=7ffeee9086e0 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.036000 audit: BPF prog-id=202 op=LOAD Dec 16 13:09:03.036000 audit[4746]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeee907c90 a2=94 a3=2 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.036000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.036000 audit: BPF prog-id=202 op=UNLOAD Dec 16 13:09:03.036000 audit[4746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeee907c90 a2=0 a3=2 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.036000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.036000 audit: BPF prog-id=203 op=LOAD Dec 16 13:09:03.036000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeee907d90 a2=94 a3=30 items=0 ppid=4567 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.036000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:09:03.121000 audit: BPF prog-id=204 op=LOAD Dec 16 13:09:03.121000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeacda54f0 a2=98 a3=0 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.121000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:03.123000 audit: BPF prog-id=204 op=UNLOAD Dec 16 13:09:03.123000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeacda54c0 a3=0 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.123000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:03.126000 audit: BPF prog-id=205 op=LOAD Dec 16 13:09:03.126000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeacda52e0 a2=94 a3=54428f items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.126000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:03.134000 audit: BPF prog-id=205 op=UNLOAD Dec 16 13:09:03.134000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeacda52e0 a2=94 a3=54428f items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:03.134000 audit: BPF prog-id=206 op=LOAD Dec 16 13:09:03.134000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeacda5310 a2=94 a3=2 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:03.136000 audit: BPF prog-id=206 op=UNLOAD Dec 16 13:09:03.136000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeacda5310 a2=0 a3=2 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:03.136000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:03.421316 systemd-networkd[1567]: califa3b1a61f1e: Gained IPv6LL Dec 16 13:09:03.701672 containerd[1969]: time="2025-12-16T13:09:03.700841491Z" level=info msg="connecting to shim f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942" address="unix:///run/containerd/s/885d0734537f7110367d7cd8fadbdf4b28b3d5f9f060aa31554ff59446bbc69f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:03.797267 containerd[1969]: time="2025-12-16T13:09:03.797141654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xp9r7,Uid:4667e186-7669-4eee-8c92-538a1a091f5e,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:03.797838 containerd[1969]: time="2025-12-16T13:09:03.797789978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h272q,Uid:c808a4b9-6eee-4490-92c6-5f208009c5e7,Namespace:calico-system,Attempt:0,}" Dec 16 13:09:04.202744 systemd[1]: Started cri-containerd-f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942.scope - libcontainer container f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942. 
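The audit PROCTITLE fields throughout these records (for both the runc and the bpftool invocations) are the process argv, hex-encoded with NUL separators and truncated by the kernel for long command lines; the runc ones, for example, begin "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..." and break off inside the container ID, so they are left as logged. A minimal Python sketch that decodes such a field, shown on one of the short bpftool proctitles above:

    # Decode an audit PROCTITLE value: hex string, NUL-separated argv.
    hexstr = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> bpftool map list --json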
Dec 16 13:09:04.332000 audit: BPF prog-id=207 op=LOAD Dec 16 13:09:04.334000 audit: BPF prog-id=208 op=LOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000f0238 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.334000 audit: BPF prog-id=208 op=UNLOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.334000 audit: BPF prog-id=209 op=LOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000f0488 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.334000 audit: BPF prog-id=210 op=LOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000f0218 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.334000 audit: BPF prog-id=210 op=UNLOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.334000 audit: BPF prog-id=209 op=UNLOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.334000 audit: BPF prog-id=211 op=LOAD Dec 16 13:09:04.334000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000f06e8 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633363433323438643933656132353533633039343738303839396132 Dec 16 13:09:04.388768 systemd-networkd[1567]: cali43554063c7a: Link UP Dec 16 13:09:04.390929 (udev-worker)[4751]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:09:04.393129 systemd-networkd[1567]: cali43554063c7a: Gained carrier Dec 16 13:09:04.469137 containerd[1969]: 2025-12-16 13:09:03.870 [INFO][4703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0 calico-apiserver-6d7fb6ffdb- calico-apiserver 17fc83ee-aaa8-428d-ba14-4fb4545cfe65 917 0 2025-12-16 13:08:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d7fb6ffdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-98 calico-apiserver-6d7fb6ffdb-x9w4j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43554063c7a [] [] }} ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-" Dec 16 13:09:04.469137 containerd[1969]: 2025-12-16 13:09:03.883 [INFO][4703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.469137 containerd[1969]: 2025-12-16 13:09:04.267 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" HandleID="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Workload="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.267 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" 
HandleID="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Workload="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e74f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-98", "pod":"calico-apiserver-6d7fb6ffdb-x9w4j", "timestamp":"2025-12-16 13:09:04.267522357 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.268 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.268 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.268 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.287 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" host="ip-172-31-28-98" Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.314 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.327 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.338 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.469461 containerd[1969]: 2025-12-16 13:09:04.342 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.342 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" host="ip-172-31-28-98" Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.347 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7 Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.361 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" host="ip-172-31-28-98" Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.374 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.2/26] block=192.168.44.0/26 handle="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" host="ip-172-31-28-98" Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.374 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.2/26] handle="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" host="ip-172-31-28-98" Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.374 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:09:04.469847 containerd[1969]: 2025-12-16 13:09:04.374 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.2/26] IPv6=[] ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" HandleID="k8s-pod-network.64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Workload="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.475356 containerd[1969]: 2025-12-16 13:09:04.381 [INFO][4703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0", GenerateName:"calico-apiserver-6d7fb6ffdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"17fc83ee-aaa8-428d-ba14-4fb4545cfe65", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7fb6ffdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"calico-apiserver-6d7fb6ffdb-x9w4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43554063c7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:04.475640 containerd[1969]: 2025-12-16 13:09:04.381 [INFO][4703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.2/32] ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.475640 containerd[1969]: 2025-12-16 13:09:04.381 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43554063c7a ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.475640 containerd[1969]: 2025-12-16 13:09:04.397 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.475803 containerd[1969]: 2025-12-16 13:09:04.399 [INFO][4703] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0", GenerateName:"calico-apiserver-6d7fb6ffdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"17fc83ee-aaa8-428d-ba14-4fb4545cfe65", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7fb6ffdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7", Pod:"calico-apiserver-6d7fb6ffdb-x9w4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43554063c7a", MAC:"e6:db:ae:2e:41:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:04.475895 containerd[1969]: 2025-12-16 13:09:04.451 [INFO][4703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-x9w4j" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--x9w4j-eth0" Dec 16 13:09:04.480283 containerd[1969]: time="2025-12-16T13:09:04.480041645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58f99f576c-h7p64,Uid:f4a8c05f-aa26-454c-a381-75bd59548a78,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3643248d93ea2553c094780899a204259f0eade3cd1f15cea6980c79dbe7942\"" Dec 16 13:09:04.552000 audit: BPF prog-id=212 op=LOAD Dec 16 13:09:04.552000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeacda51d0 a2=94 a3=1 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.552000 audit: BPF prog-id=212 op=UNLOAD Dec 16 13:09:04.552000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeacda51d0 a2=94 a3=1 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 13:09:04.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.575000 audit: BPF prog-id=213 op=LOAD Dec 16 13:09:04.575000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeacda51c0 a2=94 a3=4 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.575000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.576000 audit: BPF prog-id=213 op=UNLOAD Dec 16 13:09:04.576000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeacda51c0 a2=0 a3=4 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.576000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.576000 audit: BPF prog-id=214 op=LOAD Dec 16 13:09:04.576000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeacda5020 a2=94 a3=5 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.576000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.577000 audit: BPF prog-id=214 op=UNLOAD Dec 16 13:09:04.577000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeacda5020 a2=0 a3=5 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.577000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.577000 audit: BPF prog-id=215 op=LOAD Dec 16 13:09:04.577000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeacda5240 a2=94 a3=6 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.577000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.577000 audit: BPF prog-id=215 op=UNLOAD Dec 16 13:09:04.577000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeacda5240 a2=0 a3=6 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.577000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.578000 audit: BPF prog-id=216 op=LOAD Dec 16 13:09:04.578000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeacda49f0 a2=94 a3=88 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.578000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.578000 audit: BPF prog-id=217 op=LOAD Dec 16 13:09:04.578000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffeacda4870 a2=94 a3=2 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.578000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.578000 audit: BPF prog-id=217 op=UNLOAD Dec 16 13:09:04.578000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffeacda48a0 a2=0 a3=7ffeacda49a0 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.578000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.579000 audit: BPF prog-id=216 op=UNLOAD Dec 16 13:09:04.579000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=27a5cd10 a2=0 a3=8635fa39222b956 items=0 ppid=4567 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.579000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:09:04.587529 containerd[1969]: time="2025-12-16T13:09:04.587405414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:09:04.607051 containerd[1969]: time="2025-12-16T13:09:04.606922085Z" level=info msg="connecting to shim 64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7" address="unix:///run/containerd/s/eda48ff5f3648cb34bd8ecb8ea3f1ded6b21731a59dee943a42465f898da1015" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:04.658632 systemd-networkd[1567]: calie50f80cfe07: Link UP Dec 16 13:09:04.659012 systemd-networkd[1567]: calie50f80cfe07: Gained carrier Dec 16 13:09:04.695616 systemd-networkd[1567]: vxlan.calico: Gained IPv6LL Dec 16 13:09:04.707000 audit: BPF 
prog-id=203 op=UNLOAD Dec 16 13:09:04.707000 audit[4567]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000da2540 a2=0 a3=0 items=0 ppid=4551 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.707000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 13:09:04.734256 containerd[1969]: 2025-12-16 13:09:03.894 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0 calico-kube-controllers-7bcdd655bc- calico-system eef40561-fc3a-47f4-ab5c-0482b5980a8d 916 0 2025-12-16 13:08:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bcdd655bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-98 calico-kube-controllers-7bcdd655bc-b4pqw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie50f80cfe07 [] [] }} ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-" Dec 16 13:09:04.734256 containerd[1969]: 2025-12-16 13:09:03.895 [INFO][4706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.734256 containerd[1969]: 2025-12-16 13:09:04.280 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" HandleID="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Workload="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.282 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" HandleID="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Workload="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-98", "pod":"calico-kube-controllers-7bcdd655bc-b4pqw", "timestamp":"2025-12-16 13:09:04.280695301 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.282 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.375 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
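The audit PROCTITLE fields recorded above for runc and bpftool are the process command lines, hex-encoded with NUL separators between argv elements. A minimal decoding sketch (standard-library Python, not part of the captured log; the sample value is the bpftool proctitle that appears in the records above):

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        # argv elements are NUL-separated; drop any empty trailing element.
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # Sample copied from the bpftool audit records in this log.
    sample = (
        "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
        "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
        "6566696C7465725F76315F63616C69636F5F746D705F41"
    )
    print(decode_proctitle(sample))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

Decoded the same way, the runc records above are the usual shim invocation (runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...), with the tail apparently cut off by the audit record's proctitle length limit.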
Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.375 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.394 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" host="ip-172-31-28-98" Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.416 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.467 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.478 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.734873 containerd[1969]: 2025-12-16 13:09:04.489 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.489 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" host="ip-172-31-28-98" Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.500 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8 Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.544 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" host="ip-172-31-28-98" Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.576 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.3/26] block=192.168.44.0/26 handle="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" host="ip-172-31-28-98" Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.580 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.3/26] handle="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" host="ip-172-31-28-98" Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.580 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:09:04.737794 containerd[1969]: 2025-12-16 13:09:04.580 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.3/26] IPv6=[] ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" HandleID="k8s-pod-network.8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Workload="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.739275 containerd[1969]: 2025-12-16 13:09:04.614 [INFO][4706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0", GenerateName:"calico-kube-controllers-7bcdd655bc-", Namespace:"calico-system", SelfLink:"", UID:"eef40561-fc3a-47f4-ab5c-0482b5980a8d", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bcdd655bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"calico-kube-controllers-7bcdd655bc-b4pqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie50f80cfe07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:04.739389 containerd[1969]: 2025-12-16 13:09:04.619 [INFO][4706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.3/32] ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.739389 containerd[1969]: 2025-12-16 13:09:04.625 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie50f80cfe07 ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.739389 containerd[1969]: 2025-12-16 13:09:04.669 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.739477 containerd[1969]: 2025-12-16 
13:09:04.672 [INFO][4706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0", GenerateName:"calico-kube-controllers-7bcdd655bc-", Namespace:"calico-system", SelfLink:"", UID:"eef40561-fc3a-47f4-ab5c-0482b5980a8d", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bcdd655bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8", Pod:"calico-kube-controllers-7bcdd655bc-b4pqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie50f80cfe07", MAC:"7a:3d:f4:01:8a:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:04.739555 containerd[1969]: 2025-12-16 13:09:04.716 [INFO][4706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" Namespace="calico-system" Pod="calico-kube-controllers-7bcdd655bc-b4pqw" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--kube--controllers--7bcdd655bc--b4pqw-eth0" Dec 16 13:09:04.762431 systemd[1]: Started cri-containerd-64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7.scope - libcontainer container 64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7. 
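systemd runs each containerd task as a transient cri-containerd-<container-id>.scope unit, as in the entry above, so a sandbox started here can be located again through its cgroup. A minimal sketch (standard-library Python, not part of the log; it assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup and the systemd cgroup driver, which this log does not itself state):

    # Enumerate cri-containerd-*.scope cgroups and the PIDs they contain.
    # Assumes cgroup v2 at /sys/fs/cgroup and the systemd cgroup driver;
    # the paths are illustrative assumptions, not values from this log.
    from pathlib import Path

    for scope in Path("/sys/fs/cgroup").rglob("cri-containerd-*.scope"):
        pids = (scope / "cgroup.procs").read_text().split()
        print(scope.name, "->", ", ".join(pids) or "(no tasks)")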
Dec 16 13:09:04.782202 containerd[1969]: time="2025-12-16T13:09:04.781949270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdxm2,Uid:6fbf2e8b-b432-4b20-866b-c50e77db1d45,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:04.797944 systemd-networkd[1567]: cali499f427dd94: Link UP Dec 16 13:09:04.809334 systemd-networkd[1567]: cali499f427dd94: Gained carrier Dec 16 13:09:04.837538 containerd[1969]: time="2025-12-16T13:09:04.837473829Z" level=info msg="connecting to shim 8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8" address="unix:///run/containerd/s/670a84aa349672b39ec42ac576a62d4ca5f3fbde5fae8bf9281c561ac1c649f0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:04.864348 containerd[1969]: 2025-12-16 13:09:04.106 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0 goldmane-666569f655- calico-system ea48f51b-a248-4d71-8caa-ed889e7f5fac 915 0 2025-12-16 13:08:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-98 goldmane-666569f655-wpbz6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali499f427dd94 [] [] }} ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-" Dec 16 13:09:04.864348 containerd[1969]: 2025-12-16 13:09:04.135 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.864348 containerd[1969]: 2025-12-16 13:09:04.325 [INFO][4803] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" HandleID="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Workload="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.325 [INFO][4803] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" HandleID="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Workload="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038da00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-98", "pod":"goldmane-666569f655-wpbz6", "timestamp":"2025-12-16 13:09:04.325166512 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.326 [INFO][4803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.580 [INFO][4803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.581 [INFO][4803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.614 [INFO][4803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" host="ip-172-31-28-98" Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.637 [INFO][4803] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.653 [INFO][4803] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.670 [INFO][4803] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.680 [INFO][4803] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:04.865420 containerd[1969]: 2025-12-16 13:09:04.680 [INFO][4803] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" host="ip-172-31-28-98" Dec 16 13:09:04.865869 containerd[1969]: 2025-12-16 13:09:04.688 [INFO][4803] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d Dec 16 13:09:04.865869 containerd[1969]: 2025-12-16 13:09:04.727 [INFO][4803] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" host="ip-172-31-28-98" Dec 16 13:09:04.865869 containerd[1969]: 2025-12-16 13:09:04.757 [INFO][4803] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.4/26] block=192.168.44.0/26 handle="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" host="ip-172-31-28-98" Dec 16 13:09:04.865869 containerd[1969]: 2025-12-16 13:09:04.759 [INFO][4803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.4/26] handle="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" host="ip-172-31-28-98" Dec 16 13:09:04.865869 containerd[1969]: 2025-12-16 13:09:04.759 [INFO][4803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
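The IPAM trace above shows the pattern Calico repeats for every pod on this node: take the host-wide IPAM lock, confirm the host's affinity for the 192.168.44.0/26 block, claim the next free address (192.168.44.2, .3 and .4 so far, for the apiserver, kube-controllers and goldmane pods), write the block back, and release the lock. A minimal cross-check of those assignments against the affine block (standard-library Python, not part of the log):

    # Verify the addresses claimed above fall inside the host's affine block.
    import ipaddress

    block = ipaddress.ip_network("192.168.44.0/26")              # affinity block from the log
    assigned = ["192.168.44.2", "192.168.44.3", "192.168.44.4"]  # IPs claimed above

    for ip in assigned:
        print(ip, "in", block, "->", ipaddress.ip_address(ip) in block)

    # A /26 block holds 64 addresses, so this node has room for roughly 60
    # more pod IPs before Calico must claim an additional block.

On a live cluster, calicoctl ipam show reports the same block usage.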
Dec 16 13:09:04.865869 containerd[1969]: 2025-12-16 13:09:04.759 [INFO][4803] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.4/26] IPv6=[] ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" HandleID="k8s-pod-network.e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Workload="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.867215 containerd[1969]: 2025-12-16 13:09:04.769 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ea48f51b-a248-4d71-8caa-ed889e7f5fac", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"goldmane-666569f655-wpbz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali499f427dd94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:04.867215 containerd[1969]: 2025-12-16 13:09:04.769 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.4/32] ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.867388 containerd[1969]: 2025-12-16 13:09:04.769 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali499f427dd94 ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.867388 containerd[1969]: 2025-12-16 13:09:04.816 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.867464 containerd[1969]: 2025-12-16 13:09:04.819 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" 
WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ea48f51b-a248-4d71-8caa-ed889e7f5fac", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d", Pod:"goldmane-666569f655-wpbz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali499f427dd94", MAC:"6e:c8:df:34:b2:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:04.867563 containerd[1969]: 2025-12-16 13:09:04.841 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" Namespace="calico-system" Pod="goldmane-666569f655-wpbz6" WorkloadEndpoint="ip--172--31--28--98-k8s-goldmane--666569f655--wpbz6-eth0" Dec 16 13:09:04.868000 audit: BPF prog-id=218 op=LOAD Dec 16 13:09:04.872000 audit: BPF prog-id=219 op=LOAD Dec 16 13:09:04.872000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.872000 audit: BPF prog-id=219 op=UNLOAD Dec 16 13:09:04.872000 audit[4860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.872000 audit: BPF prog-id=220 op=LOAD Dec 16 13:09:04.872000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.876000 audit: BPF prog-id=221 op=LOAD Dec 16 13:09:04.876000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.877000 audit: BPF prog-id=221 op=UNLOAD Dec 16 13:09:04.877000 audit[4860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.878000 audit: BPF prog-id=220 op=UNLOAD Dec 16 13:09:04.878000 audit[4860]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.878000 audit: BPF prog-id=222 op=LOAD Dec 16 13:09:04.878000 audit[4860]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4847 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:04.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634643263346639633939623563323161363465393864376338663565 Dec 16 13:09:04.943499 containerd[1969]: time="2025-12-16T13:09:04.943261253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:04.991169 containerd[1969]: time="2025-12-16T13:09:04.990687659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:09:04.991945 containerd[1969]: time="2025-12-16T13:09:04.991875076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:05.008088 kubelet[3309]: E1216 13:09:05.005745 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:05.008975 kubelet[3309]: E1216 13:09:05.008766 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:05.050286 kubelet[3309]: E1216 13:09:05.048208 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:561161844d8542869bf93b20f103b053,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:05.051405 systemd[1]: Started cri-containerd-8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8.scope - libcontainer container 8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8. 
Dec 16 13:09:05.064989 containerd[1969]: time="2025-12-16T13:09:05.064273230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:09:05.215423 containerd[1969]: time="2025-12-16T13:09:05.215347768Z" level=info msg="connecting to shim e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d" address="unix:///run/containerd/s/cefcd6b9559b29eabe1e8381dffe92b363a1c62d1fe9fa7409c1a4812199b3d1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:05.217482 containerd[1969]: time="2025-12-16T13:09:05.217437669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-x9w4j,Uid:17fc83ee-aaa8-428d-ba14-4fb4545cfe65,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"64d2c4f9c99b5c21a64e98d7c8f5eaf18d9ae643f45320eb35c9a9705db783b7\"" Dec 16 13:09:05.330745 systemd[1]: Started cri-containerd-e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d.scope - libcontainer container e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d. Dec 16 13:09:05.392698 containerd[1969]: time="2025-12-16T13:09:05.392420039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:05.397831 containerd[1969]: time="2025-12-16T13:09:05.397445618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:09:05.398509 containerd[1969]: time="2025-12-16T13:09:05.397448902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:05.401536 kubelet[3309]: E1216 13:09:05.401240 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:05.401536 kubelet[3309]: E1216 13:09:05.401334 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:05.401828 kubelet[3309]: E1216 13:09:05.401592 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:05.402765 containerd[1969]: time="2025-12-16T13:09:05.402717162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:05.408080 kubelet[3309]: E1216 13:09:05.407972 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:09:05.462268 systemd-networkd[1567]: cali43554063c7a: Gained IPv6LL Dec 16 13:09:05.565000 audit: BPF prog-id=223 op=LOAD Dec 16 13:09:05.567000 audit: BPF prog-id=224 op=LOAD Dec 16 13:09:05.567000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 13:09:05.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.567000 audit: BPF prog-id=224 op=UNLOAD Dec 16 13:09:05.567000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.568000 audit: BPF prog-id=225 op=LOAD Dec 16 13:09:05.568000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.568000 audit: BPF prog-id=226 op=LOAD Dec 16 13:09:05.568000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.570000 audit: BPF prog-id=226 op=UNLOAD Dec 16 13:09:05.570000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.570000 audit: BPF prog-id=225 op=UNLOAD Dec 16 13:09:05.570000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.570000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.570000 audit: BPF prog-id=227 op=LOAD Dec 16 13:09:05.570000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4900 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861363837373966326630633133636430366463646232383630626161 Dec 16 13:09:05.573000 audit[5014]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:05.573000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd7f601570 a2=0 a3=7ffd7f60155c items=0 ppid=4567 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.573000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:05.602593 systemd-networkd[1567]: cali307707cd4c1: Link UP Dec 16 13:09:05.610047 systemd-networkd[1567]: cali307707cd4c1: Gained carrier Dec 16 13:09:05.667177 containerd[1969]: 2025-12-16 13:09:05.145 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0 csi-node-driver- calico-system c808a4b9-6eee-4490-92c6-5f208009c5e7 800 0 2025-12-16 13:08:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-98 csi-node-driver-h272q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali307707cd4c1 [] [] }} ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-" Dec 16 13:09:05.667177 containerd[1969]: 2025-12-16 13:09:05.145 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 13:09:05.667177 containerd[1969]: 2025-12-16 13:09:05.436 [INFO][5004] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" HandleID="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Workload="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 
13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.440 [INFO][5004] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" HandleID="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Workload="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-98", "pod":"csi-node-driver-h272q", "timestamp":"2025-12-16 13:09:05.436519098 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.441 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.441 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.441 [INFO][5004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.494 [INFO][5004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" host="ip-172-31-28-98" Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.509 [INFO][5004] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.540 [INFO][5004] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.548 [INFO][5004] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.554 [INFO][5004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:05.667508 containerd[1969]: 2025-12-16 13:09:05.554 [INFO][5004] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" host="ip-172-31-28-98" Dec 16 13:09:05.624000 audit[5012]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=5012 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:05.624000 audit[5012]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff6f95dcc0 a2=0 a3=7fff6f95dcac items=0 ppid=4567 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.624000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:05.673362 containerd[1969]: 2025-12-16 13:09:05.557 [INFO][5004] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf Dec 16 13:09:05.673362 containerd[1969]: 2025-12-16 13:09:05.566 [INFO][5004] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 
handle="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" host="ip-172-31-28-98" Dec 16 13:09:05.673362 containerd[1969]: 2025-12-16 13:09:05.579 [INFO][5004] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.5/26] block=192.168.44.0/26 handle="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" host="ip-172-31-28-98" Dec 16 13:09:05.673362 containerd[1969]: 2025-12-16 13:09:05.580 [INFO][5004] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.5/26] handle="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" host="ip-172-31-28-98" Dec 16 13:09:05.673362 containerd[1969]: 2025-12-16 13:09:05.580 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:09:05.673362 containerd[1969]: 2025-12-16 13:09:05.580 [INFO][5004] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.5/26] IPv6=[] ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" HandleID="k8s-pod-network.2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Workload="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 13:09:05.673620 containerd[1969]: 2025-12-16 13:09:05.589 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c808a4b9-6eee-4490-92c6-5f208009c5e7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"csi-node-driver-h272q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali307707cd4c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:05.673766 containerd[1969]: 2025-12-16 13:09:05.589 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.5/32] ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 13:09:05.673766 containerd[1969]: 2025-12-16 13:09:05.589 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali307707cd4c1 
ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 13:09:05.673766 containerd[1969]: 2025-12-16 13:09:05.605 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 13:09:05.673880 containerd[1969]: 2025-12-16 13:09:05.606 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c808a4b9-6eee-4490-92c6-5f208009c5e7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf", Pod:"csi-node-driver-h272q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali307707cd4c1", MAC:"16:3f:8d:ba:b6:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:05.673971 containerd[1969]: 2025-12-16 13:09:05.644 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" Namespace="calico-system" Pod="csi-node-driver-h272q" WorkloadEndpoint="ip--172--31--28--98-k8s-csi--node--driver--h272q-eth0" Dec 16 13:09:05.678000 audit[5006]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5006 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:05.678000 audit[5006]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc4d8094f0 a2=0 a3=7ffc4d8094dc items=0 ppid=4567 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.678000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:05.700222 containerd[1969]: time="2025-12-16T13:09:05.699717252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:05.714170 containerd[1969]: time="2025-12-16T13:09:05.713349394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:05.714488 containerd[1969]: time="2025-12-16T13:09:05.714456806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:05.715304 kubelet[3309]: E1216 13:09:05.715014 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:05.715658 kubelet[3309]: E1216 13:09:05.715449 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:05.718428 kubelet[3309]: E1216 13:09:05.718053 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6h55z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-x9w4j_calico-apiserver(17fc83ee-aaa8-428d-ba14-4fb4545cfe65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:05.719848 kubelet[3309]: E1216 13:09:05.719776 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:09:05.721000 audit[5084]: NETFILTER_CFG table=filter:124 family=2 entries=39 op=nft_register_chain pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:05.721000 audit[5084]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffd9370fb60 a2=0 a3=7ffd9370fb4c items=0 ppid=4567 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.721000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:05.767000 audit: BPF prog-id=228 op=LOAD Dec 16 13:09:05.769000 audit: BPF prog-id=229 op=LOAD Dec 16 13:09:05.769000 audit[5038]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.770000 audit: BPF prog-id=229 op=UNLOAD Dec 16 13:09:05.770000 audit[5038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.770000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.772000 audit: BPF prog-id=230 op=LOAD Dec 16 13:09:05.772000 audit[5038]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.772000 audit: BPF prog-id=231 op=LOAD Dec 16 13:09:05.772000 audit[5038]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.772000 audit: BPF prog-id=231 op=UNLOAD Dec 16 13:09:05.772000 audit[5038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.773000 audit: BPF prog-id=230 op=UNLOAD Dec 16 13:09:05.773000 audit[5038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.773000 audit: BPF prog-id=232 op=LOAD Dec 16 13:09:05.773000 audit[5038]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5023 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.773000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643336346232306336643732373761613337636162303562646335 Dec 16 13:09:05.782290 containerd[1969]: time="2025-12-16T13:09:05.781915594Z" level=info msg="connecting to shim 2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf" address="unix:///run/containerd/s/6f601136956e25b2c44d7dab9e76e61c0b3c09c11412421342d9f26dc41faf14" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:05.839757 systemd-networkd[1567]: cali3160487b38d: Link UP Dec 16 13:09:05.841898 systemd-networkd[1567]: cali3160487b38d: Gained carrier Dec 16 13:09:05.912403 systemd[1]: Started cri-containerd-2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf.scope - libcontainer container 2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf. Dec 16 13:09:05.923971 kubelet[3309]: E1216 13:09:05.923896 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:09:05.926325 kubelet[3309]: E1216 13:09:05.926266 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:09:05.937311 containerd[1969]: 2025-12-16 13:09:05.286 [INFO][4930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0 coredns-674b8bbfcf- kube-system 6fbf2e8b-b432-4b20-866b-c50e77db1d45 912 0 2025-12-16 13:07:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-98 coredns-674b8bbfcf-hdxm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3160487b38d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-" Dec 16 13:09:05.937311 containerd[1969]: 2025-12-16 13:09:05.288 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.937311 containerd[1969]: 2025-12-16 13:09:05.458 [INFO][5052] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" HandleID="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Workload="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.461 [INFO][5052] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" HandleID="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Workload="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398430), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-98", "pod":"coredns-674b8bbfcf-hdxm2", "timestamp":"2025-12-16 13:09:05.458267184 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.461 [INFO][5052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.580 [INFO][5052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.581 [INFO][5052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.614 [INFO][5052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" host="ip-172-31-28-98" Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.654 [INFO][5052] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.695 [INFO][5052] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.714 [INFO][5052] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.724 [INFO][5052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:05.937634 containerd[1969]: 2025-12-16 13:09:05.726 [INFO][5052] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" host="ip-172-31-28-98" Dec 16 13:09:05.938082 containerd[1969]: 2025-12-16 13:09:05.736 [INFO][5052] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9 Dec 16 13:09:05.938082 containerd[1969]: 2025-12-16 13:09:05.766 [INFO][5052] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" host="ip-172-31-28-98" Dec 16 13:09:05.938082 
containerd[1969]: 2025-12-16 13:09:05.794 [INFO][5052] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.6/26] block=192.168.44.0/26 handle="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" host="ip-172-31-28-98" Dec 16 13:09:05.938082 containerd[1969]: 2025-12-16 13:09:05.795 [INFO][5052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.6/26] handle="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" host="ip-172-31-28-98" Dec 16 13:09:05.938082 containerd[1969]: 2025-12-16 13:09:05.795 [INFO][5052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:09:05.938082 containerd[1969]: 2025-12-16 13:09:05.795 [INFO][5052] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.6/26] IPv6=[] ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" HandleID="k8s-pod-network.7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Workload="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.938330 containerd[1969]: 2025-12-16 13:09:05.830 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6fbf2e8b-b432-4b20-866b-c50e77db1d45", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"coredns-674b8bbfcf-hdxm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3160487b38d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:05.938330 containerd[1969]: 2025-12-16 13:09:05.831 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.6/32] ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.938330 containerd[1969]: 
2025-12-16 13:09:05.831 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3160487b38d ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.938330 containerd[1969]: 2025-12-16 13:09:05.846 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.938330 containerd[1969]: 2025-12-16 13:09:05.854 [INFO][4930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6fbf2e8b-b432-4b20-866b-c50e77db1d45", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9", Pod:"coredns-674b8bbfcf-hdxm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3160487b38d", MAC:"82:2b:15:5a:9c:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:05.938330 containerd[1969]: 2025-12-16 13:09:05.896 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdxm2" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--hdxm2-eth0" Dec 16 13:09:05.994000 audit: BPF prog-id=233 op=LOAD Dec 16 13:09:05.996449 containerd[1969]: time="2025-12-16T13:09:05.995913121Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7bcdd655bc-b4pqw,Uid:eef40561-fc3a-47f4-ab5c-0482b5980a8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a68779f2f0c13cd06dcdb2860baa251a66c8fdef99bbfeb1e878965ca0495e8\"" Dec 16 13:09:05.997000 audit: BPF prog-id=234 op=LOAD Dec 16 13:09:05.997000 audit[5109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:05.997000 audit: BPF prog-id=234 op=UNLOAD Dec 16 13:09:05.997000 audit[5109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:05.998000 audit: BPF prog-id=235 op=LOAD Dec 16 13:09:05.998000 audit[5109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:05.998000 audit: BPF prog-id=236 op=LOAD Dec 16 13:09:05.998000 audit[5109]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:05.998000 audit: BPF prog-id=236 op=UNLOAD Dec 16 13:09:05.998000 audit[5109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.998000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:05.998000 audit: BPF prog-id=235 op=UNLOAD Dec 16 13:09:05.998000 audit[5109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:05.998000 audit: BPF prog-id=237 op=LOAD Dec 16 13:09:05.998000 audit[5109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5095 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:05.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265396266383962656562313136356263336665623863323833333137 Dec 16 13:09:06.005343 containerd[1969]: time="2025-12-16T13:09:06.005195757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:06.077653 systemd-networkd[1567]: cali9d8badd62be: Link UP Dec 16 13:09:06.089034 containerd[1969]: time="2025-12-16T13:09:06.088504175Z" level=info msg="connecting to shim 7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9" address="unix:///run/containerd/s/d1bab767961ed836a74d64ff748a927604950f8a4e33aeb01a92e92815eb8993" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:06.089227 systemd-networkd[1567]: cali9d8badd62be: Gained carrier Dec 16 13:09:06.105871 containerd[1969]: time="2025-12-16T13:09:06.105792987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h272q,Uid:c808a4b9-6eee-4490-92c6-5f208009c5e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e9bf89beeb1165bc3feb8c28331772b14d00abc565e54758b68faece2debcbf\"" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.157 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0 coredns-674b8bbfcf- kube-system 4667e186-7669-4eee-8c92-538a1a091f5e 911 0 2025-12-16 13:07:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-98 coredns-674b8bbfcf-xp9r7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d8badd62be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 
13:09:05.157 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.467 [INFO][5011] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" HandleID="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Workload="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.467 [INFO][5011] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" HandleID="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Workload="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c7b40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-98", "pod":"coredns-674b8bbfcf-xp9r7", "timestamp":"2025-12-16 13:09:05.467391828 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.467 [INFO][5011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.795 [INFO][5011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.797 [INFO][5011] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.849 [INFO][5011] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.874 [INFO][5011] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.909 [INFO][5011] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.919 [INFO][5011] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.942 [INFO][5011] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.942 [INFO][5011] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.952 [INFO][5011] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:05.993 [INFO][5011] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:06.044 [INFO][5011] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.7/26] block=192.168.44.0/26 handle="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:06.047 [INFO][5011] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.7/26] handle="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" host="ip-172-31-28-98" Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:06.047 [INFO][5011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:09:06.170172 containerd[1969]: 2025-12-16 13:09:06.047 [INFO][5011] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.7/26] IPv6=[] ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" HandleID="k8s-pod-network.9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Workload="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.175917 containerd[1969]: 2025-12-16 13:09:06.056 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4667e186-7669-4eee-8c92-538a1a091f5e", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"coredns-674b8bbfcf-xp9r7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d8badd62be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:06.175917 containerd[1969]: 2025-12-16 13:09:06.057 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.7/32] ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.175917 containerd[1969]: 2025-12-16 13:09:06.058 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d8badd62be ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.175917 containerd[1969]: 2025-12-16 13:09:06.097 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" 
WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.175917 containerd[1969]: 2025-12-16 13:09:06.103 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4667e186-7669-4eee-8c92-538a1a091f5e", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e", Pod:"coredns-674b8bbfcf-xp9r7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d8badd62be", MAC:"fe:db:bf:56:ec:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:06.175917 containerd[1969]: 2025-12-16 13:09:06.146 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" Namespace="kube-system" Pod="coredns-674b8bbfcf-xp9r7" WorkloadEndpoint="ip--172--31--28--98-k8s-coredns--674b8bbfcf--xp9r7-eth0" Dec 16 13:09:06.201454 systemd[1]: Started cri-containerd-7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9.scope - libcontainer container 7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9. 
Dec 16 13:09:06.255592 containerd[1969]: time="2025-12-16T13:09:06.255527082Z" level=info msg="connecting to shim 9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e" address="unix:///run/containerd/s/09e6344630007d908dd141c5fbab711249ed24646889ac98b829a612554d2832" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:06.259370 containerd[1969]: time="2025-12-16T13:09:06.259311911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:06.261720 containerd[1969]: time="2025-12-16T13:09:06.261660711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:06.261906 containerd[1969]: time="2025-12-16T13:09:06.261683296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:06.263038 kubelet[3309]: E1216 13:09:06.262980 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:06.263566 kubelet[3309]: E1216 13:09:06.263262 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:06.264226 kubelet[3309]: E1216 13:09:06.264093 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6qrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bcdd655bc-b4pqw_calico-system(eef40561-fc3a-47f4-ab5c-0482b5980a8d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:06.266024 kubelet[3309]: E1216 13:09:06.265981 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:09:06.268591 containerd[1969]: time="2025-12-16T13:09:06.268541600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:09:06.295831 systemd-networkd[1567]: calie50f80cfe07: Gained IPv6LL Dec 16 13:09:06.262000 audit[5198]: NETFILTER_CFG table=filter:125 family=2 entries=199 op=nft_register_chain pid=5198 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:06.262000 audit[5198]: SYSCALL arch=c000003e syscall=46 success=yes exit=119652 a0=3 a1=7ffed82389a0 a2=0 a3=7ffed823898c items=0 ppid=4567 pid=5198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.262000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:06.318000 audit: BPF prog-id=238 op=LOAD Dec 16 13:09:06.323000 audit: BPF prog-id=239 op=LOAD Dec 16 13:09:06.323000 audit[5171]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.323000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.327000 audit: BPF prog-id=239 op=UNLOAD Dec 16 13:09:06.327000 audit[5171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.331000 audit: BPF prog-id=240 op=LOAD Dec 16 13:09:06.331000 audit[5171]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.331000 audit: BPF prog-id=241 op=LOAD Dec 16 13:09:06.331000 audit[5171]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.331000 audit: BPF prog-id=241 op=UNLOAD Dec 16 13:09:06.331000 audit[5171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.331000 audit: BPF prog-id=240 op=UNLOAD Dec 16 13:09:06.331000 audit[5171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.331000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.334000 audit: BPF prog-id=242 op=LOAD Dec 16 13:09:06.334000 audit[5171]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5158 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643863343438323634666530343433383732613638383838386236 Dec 16 13:09:06.349114 systemd-networkd[1567]: cali84ab746864d: Link UP Dec 16 13:09:06.356501 systemd-networkd[1567]: cali84ab746864d: Gained carrier Dec 16 13:09:06.358380 systemd-networkd[1567]: cali499f427dd94: Gained IPv6LL Dec 16 13:09:06.379000 audit[5233]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:06.379000 audit[5233]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcc490010 a2=0 a3=7fffcc48fffc items=0 ppid=3636 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:06.384000 audit[5233]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:06.384000 audit[5233]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffcc490010 a2=0 a3=0 items=0 ppid=3636 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:05.262 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0 calico-apiserver-6d7fb6ffdb- calico-apiserver 402c8f91-f505-4b31-ab8d-437df33aba9f 914 0 2025-12-16 13:08:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d7fb6ffdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-98 calico-apiserver-6d7fb6ffdb-t947q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali84ab746864d [] [] }} ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" 
WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:05.265 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:05.520 [INFO][5053] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" HandleID="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Workload="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:05.520 [INFO][5053] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" HandleID="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Workload="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-98", "pod":"calico-apiserver-6d7fb6ffdb-t947q", "timestamp":"2025-12-16 13:09:05.520153028 +0000 UTC"}, Hostname:"ip-172-31-28-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:05.520 [INFO][5053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.047 [INFO][5053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.047 [INFO][5053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-98' Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.144 [INFO][5053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.167 [INFO][5053] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.206 [INFO][5053] ipam/ipam.go 511: Trying affinity for 192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.213 [INFO][5053] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.226 [INFO][5053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.227 [INFO][5053] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.238 [INFO][5053] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7 Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.264 [INFO][5053] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.291 [INFO][5053] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.8/26] block=192.168.44.0/26 handle="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.291 [INFO][5053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.8/26] handle="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" host="ip-172-31-28-98" Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.291 [INFO][5053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:09:06.413641 containerd[1969]: 2025-12-16 13:09:06.291 [INFO][5053] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.8/26] IPv6=[] ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" HandleID="k8s-pod-network.567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Workload="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.415719 containerd[1969]: 2025-12-16 13:09:06.331 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0", GenerateName:"calico-apiserver-6d7fb6ffdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"402c8f91-f505-4b31-ab8d-437df33aba9f", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7fb6ffdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"", Pod:"calico-apiserver-6d7fb6ffdb-t947q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84ab746864d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:06.415719 containerd[1969]: 2025-12-16 13:09:06.331 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.8/32] ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.415719 containerd[1969]: 2025-12-16 13:09:06.334 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84ab746864d ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.415719 containerd[1969]: 2025-12-16 13:09:06.364 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.415719 containerd[1969]: 2025-12-16 13:09:06.367 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0", GenerateName:"calico-apiserver-6d7fb6ffdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"402c8f91-f505-4b31-ab8d-437df33aba9f", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 8, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7fb6ffdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-98", ContainerID:"567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7", Pod:"calico-apiserver-6d7fb6ffdb-t947q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84ab746864d", MAC:"b2:67:18:99:68:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:09:06.415719 containerd[1969]: 2025-12-16 13:09:06.400 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" Namespace="calico-apiserver" Pod="calico-apiserver-6d7fb6ffdb-t947q" WorkloadEndpoint="ip--172--31--28--98-k8s-calico--apiserver--6d7fb6ffdb--t947q-eth0" Dec 16 13:09:06.420422 systemd[1]: Started cri-containerd-9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e.scope - libcontainer container 9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e. 
Dec 16 13:09:06.462417 containerd[1969]: time="2025-12-16T13:09:06.462365345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wpbz6,Uid:ea48f51b-a248-4d71-8caa-ed889e7f5fac,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4d364b20c6d7277aa37cab05bdc5b40fb5a679698f2a209b86cc1906423d38d\"" Dec 16 13:09:06.465000 audit: BPF prog-id=243 op=LOAD Dec 16 13:09:06.466000 audit: BPF prog-id=244 op=LOAD Dec 16 13:09:06.466000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.466000 audit: BPF prog-id=244 op=UNLOAD Dec 16 13:09:06.466000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.460000 audit[5250]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:06.467000 audit: BPF prog-id=245 op=LOAD Dec 16 13:09:06.467000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.467000 audit: BPF prog-id=246 op=LOAD Dec 16 13:09:06.467000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.468000 audit: BPF prog-id=246 op=UNLOAD Dec 16 13:09:06.468000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.468000 audit: BPF prog-id=245 op=UNLOAD Dec 16 13:09:06.468000 audit[5218]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.469000 audit: BPF prog-id=247 op=LOAD Dec 16 13:09:06.469000 audit[5218]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5205 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933303764376430383464623062663132646536363135383838616637 Dec 16 13:09:06.460000 audit[5250]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1ab8d2b0 a2=0 a3=7ffe1ab8d29c items=0 ppid=3636 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:06.481000 audit[5250]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:06.481000 audit[5250]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe1ab8d2b0 a2=0 a3=0 items=0 ppid=3636 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:06.520000 audit[5257]: NETFILTER_CFG table=filter:130 family=2 entries=84 op=nft_register_chain pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:06.520000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=44020 a0=3 a1=7fffc598b430 a2=0 a3=7fffc598b41c items=0 ppid=4567 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.520000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:06.573000 audit[5273]: NETFILTER_CFG table=filter:131 family=2 entries=53 op=nft_register_chain pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:09:06.573000 audit[5273]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffd6daa87e0 a2=0 a3=7ffd6daa87cc items=0 ppid=4567 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.573000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:09:06.584513 containerd[1969]: time="2025-12-16T13:09:06.584438334Z" level=info msg="connecting to shim 567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7" address="unix:///run/containerd/s/c3eff524142db9e56e04ae6fa003d360c5cbf881f4bc17d3b78f9922bc31211d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:06.586054 containerd[1969]: time="2025-12-16T13:09:06.585970203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdxm2,Uid:6fbf2e8b-b432-4b20-866b-c50e77db1d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9\"" Dec 16 13:09:06.595698 containerd[1969]: time="2025-12-16T13:09:06.595427706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xp9r7,Uid:4667e186-7669-4eee-8c92-538a1a091f5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e\"" Dec 16 13:09:06.603347 containerd[1969]: time="2025-12-16T13:09:06.603081030Z" level=info msg="CreateContainer within sandbox \"7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:09:06.608387 containerd[1969]: time="2025-12-16T13:09:06.608327513Z" level=info msg="CreateContainer within sandbox \"9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:09:06.632219 systemd[1]: Started cri-containerd-567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7.scope - libcontainer container 567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7. 
Dec 16 13:09:06.636286 containerd[1969]: time="2025-12-16T13:09:06.636214155Z" level=info msg="Container bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:06.662464 containerd[1969]: time="2025-12-16T13:09:06.662175481Z" level=info msg="Container d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:06.668000 audit: BPF prog-id=248 op=LOAD Dec 16 13:09:06.671000 audit: BPF prog-id=249 op=LOAD Dec 16 13:09:06.671000 audit[5293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.672000 audit: BPF prog-id=249 op=UNLOAD Dec 16 13:09:06.672000 audit[5293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.672000 audit: BPF prog-id=250 op=LOAD Dec 16 13:09:06.672000 audit[5293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.673000 audit: BPF prog-id=251 op=LOAD Dec 16 13:09:06.673000 audit[5293]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.673000 audit: BPF prog-id=251 op=UNLOAD Dec 16 13:09:06.673000 audit[5293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.674000 audit: BPF prog-id=250 op=UNLOAD Dec 16 13:09:06.674000 audit[5293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.675070 containerd[1969]: time="2025-12-16T13:09:06.674946345Z" level=info msg="CreateContainer within sandbox \"7bd8c448264fe0443872a688888b6869e2847cd1bf742910ebb2b3a6655a69b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48\"" Dec 16 13:09:06.674000 audit: BPF prog-id=252 op=LOAD Dec 16 13:09:06.674000 audit[5293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5282 pid=5293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536373330336162396631353365336130396333336361343163633732 Dec 16 13:09:06.677209 containerd[1969]: time="2025-12-16T13:09:06.677171720Z" level=info msg="StartContainer for \"bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48\"" Dec 16 13:09:06.680562 containerd[1969]: time="2025-12-16T13:09:06.680501280Z" level=info msg="connecting to shim bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48" address="unix:///run/containerd/s/d1bab767961ed836a74d64ff748a927604950f8a4e33aeb01a92e92815eb8993" protocol=ttrpc version=3 Dec 16 13:09:06.683399 containerd[1969]: time="2025-12-16T13:09:06.683351498Z" level=info msg="CreateContainer within sandbox \"9307d7d084db0bf12de6615888af7accb78f6d94fbf6680a7d54aa1f3eabc89e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1\"" Dec 16 13:09:06.687133 containerd[1969]: time="2025-12-16T13:09:06.686572370Z" level=info msg="StartContainer for \"d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1\"" Dec 16 13:09:06.691504 containerd[1969]: time="2025-12-16T13:09:06.691316659Z" level=info msg="connecting to shim d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1" address="unix:///run/containerd/s/09e6344630007d908dd141c5fbab711249ed24646889ac98b829a612554d2832" protocol=ttrpc version=3 Dec 16 13:09:06.748889 containerd[1969]: time="2025-12-16T13:09:06.721596157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:06.748889 containerd[1969]: 
time="2025-12-16T13:09:06.724008811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:09:06.748889 containerd[1969]: time="2025-12-16T13:09:06.724188468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:06.748889 containerd[1969]: time="2025-12-16T13:09:06.725482867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:06.749210 kubelet[3309]: E1216 13:09:06.724562 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:06.749210 kubelet[3309]: E1216 13:09:06.724641 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:06.752652 kubelet[3309]: E1216 13:09:06.724988 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:09:06.757461 systemd[1]: Started cri-containerd-bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48.scope - libcontainer container bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48. Dec 16 13:09:06.804726 systemd[1]: Started cri-containerd-d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1.scope - libcontainer container d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1. Dec 16 13:09:06.816000 audit: BPF prog-id=253 op=LOAD Dec 16 13:09:06.823000 audit: BPF prog-id=254 op=LOAD Dec 16 13:09:06.823000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.825000 audit: BPF prog-id=254 op=UNLOAD Dec 16 13:09:06.825000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.825000 audit: BPF prog-id=255 op=LOAD Dec 16 13:09:06.825000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.829000 audit: BPF prog-id=256 op=LOAD Dec 16 13:09:06.829000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.829000 audit: BPF prog-id=256 op=UNLOAD Dec 16 13:09:06.829000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.829000 audit: BPF prog-id=255 op=UNLOAD Dec 16 13:09:06.829000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.830000 audit: BPF prog-id=257 op=LOAD Dec 16 13:09:06.830000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5158 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266613261306639643231616462396665643165323163363138313635 Dec 16 13:09:06.835009 containerd[1969]: time="2025-12-16T13:09:06.834895811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7fb6ffdb-t947q,Uid:402c8f91-f505-4b31-ab8d-437df33aba9f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"567303ab9f153e3a09c33ca41cc72352b5f3b79b4658beb9b9d9939c16de70a7\"" Dec 16 13:09:06.862000 audit: BPF prog-id=258 op=LOAD Dec 16 13:09:06.866000 audit: BPF prog-id=259 op=LOAD Dec 16 13:09:06.866000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.867000 audit: BPF prog-id=259 op=UNLOAD Dec 16 13:09:06.867000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.868000 audit: BPF prog-id=260 op=LOAD Dec 16 
13:09:06.868000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.868000 audit: BPF prog-id=261 op=LOAD Dec 16 13:09:06.868000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.868000 audit: BPF prog-id=261 op=UNLOAD Dec 16 13:09:06.868000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.868000 audit: BPF prog-id=260 op=UNLOAD Dec 16 13:09:06.868000 audit[5314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.870000 audit: BPF prog-id=262 op=LOAD Dec 16 13:09:06.870000 audit[5314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=5205 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:06.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313263343236353332303961313034623835616465346661646261 Dec 16 13:09:06.890493 containerd[1969]: time="2025-12-16T13:09:06.890357919Z" level=info msg="StartContainer for \"bfa2a0f9d21adb9fed1e21c6181655a4b34f07601fc031c0081527e7af92be48\" returns successfully" Dec 16 
13:09:06.910255 containerd[1969]: time="2025-12-16T13:09:06.910155617Z" level=info msg="StartContainer for \"d212c42653209a104b85ade4fadba621fd3a733c53c71de44edd9e4f9deffaa1\" returns successfully" Dec 16 13:09:06.947566 kubelet[3309]: E1216 13:09:06.947355 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:09:06.960313 kubelet[3309]: E1216 13:09:06.960258 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:09:06.996133 kubelet[3309]: I1216 13:09:06.991882 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xp9r7" podStartSLOduration=82.991849366 podStartE2EDuration="1m22.991849366s" podCreationTimestamp="2025-12-16 13:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:06.991129185 +0000 UTC m=+86.598245267" watchObservedRunningTime="2025-12-16 13:09:06.991849366 +0000 UTC m=+86.598965462" Dec 16 13:09:06.996562 kubelet[3309]: I1216 13:09:06.996397 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hdxm2" podStartSLOduration=82.996371894 podStartE2EDuration="1m22.996371894s" podCreationTimestamp="2025-12-16 13:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:06.969734527 +0000 UTC m=+86.576850609" watchObservedRunningTime="2025-12-16 13:09:06.996371894 +0000 UTC m=+86.603487950" Dec 16 13:09:07.000229 systemd-networkd[1567]: cali3160487b38d: Gained IPv6LL Dec 16 13:09:07.052725 containerd[1969]: time="2025-12-16T13:09:07.052082547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:07.057453 containerd[1969]: time="2025-12-16T13:09:07.054709378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:07.057453 containerd[1969]: time="2025-12-16T13:09:07.054824484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:07.058624 kubelet[3309]: E1216 13:09:07.058570 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:07.059159 kubelet[3309]: E1216 13:09:07.059043 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:07.064784 systemd-networkd[1567]: cali307707cd4c1: Gained IPv6LL Dec 16 13:09:07.071276 kubelet[3309]: E1216 13:09:07.070750 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wps7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wpbz6_calico-system(ea48f51b-a248-4d71-8caa-ed889e7f5fac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:07.071448 containerd[1969]: time="2025-12-16T13:09:07.070600524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:09:07.073053 kubelet[3309]: E1216 13:09:07.073001 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:09:07.381485 containerd[1969]: time="2025-12-16T13:09:07.381366357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:07.384171 containerd[1969]: time="2025-12-16T13:09:07.384102297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:07.384508 containerd[1969]: time="2025-12-16T13:09:07.384042256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:09:07.384624 kubelet[3309]: E1216 13:09:07.384510 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:07.384624 kubelet[3309]: E1216 13:09:07.384575 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:07.385603 kubelet[3309]: E1216 13:09:07.384877 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:07.386384 containerd[1969]: time="2025-12-16T13:09:07.385964713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:07.386598 kubelet[3309]: E1216 13:09:07.386290 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:09:07.517000 audit[5371]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:07.518527 kernel: kauditd_printk_skb: 357 callbacks suppressed Dec 16 13:09:07.518635 kernel: audit: type=1325 audit(1765890547.517:744): table=filter:132 family=2 entries=20 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:07.517000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5c6b7660 a2=0 a3=7fff5c6b764c 
items=0 ppid=3636 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:07.522339 kernel: audit: type=1300 audit(1765890547.517:744): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5c6b7660 a2=0 a3=7fff5c6b764c items=0 ppid=3636 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:07.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:07.527646 kernel: audit: type=1327 audit(1765890547.517:744): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:07.529000 audit[5371]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:07.530309 kernel: audit: type=1325 audit(1765890547.529:745): table=nat:133 family=2 entries=14 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:07.537103 kernel: audit: type=1300 audit(1765890547.529:745): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff5c6b7660 a2=0 a3=0 items=0 ppid=3636 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:07.529000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff5c6b7660 a2=0 a3=0 items=0 ppid=3636 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:07.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:07.541102 kernel: audit: type=1327 audit(1765890547.529:745): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:07.574451 systemd-networkd[1567]: cali84ab746864d: Gained IPv6LL Dec 16 13:09:07.712039 containerd[1969]: time="2025-12-16T13:09:07.711432626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:07.714317 containerd[1969]: time="2025-12-16T13:09:07.714183868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:07.714317 containerd[1969]: time="2025-12-16T13:09:07.714229409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:07.714552 kubelet[3309]: E1216 13:09:07.714486 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 
13:09:07.720000 kubelet[3309]: E1216 13:09:07.714555 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:07.720000 kubelet[3309]: E1216 13:09:07.714799 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-t947q_calico-apiserver(402c8f91-f505-4b31-ab8d-437df33aba9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:07.720800 kubelet[3309]: E1216 13:09:07.720721 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:09:07.894401 systemd-networkd[1567]: cali9d8badd62be: Gained IPv6LL Dec 16 13:09:08.015398 kubelet[3309]: E1216 13:09:08.015152 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:09:08.015398 kubelet[3309]: E1216 13:09:08.015276 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:09:08.015398 kubelet[3309]: E1216 13:09:08.015349 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:09:08.017337 kubelet[3309]: E1216 13:09:08.017262 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:09:08.572000 audit[5379]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:08.572000 audit[5379]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcad788640 a2=0 a3=7ffcad78862c items=0 ppid=3636 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:08.577015 kernel: audit: type=1325 audit(1765890548.572:746): table=filter:134 family=2 entries=20 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:08.577120 kernel: audit: type=1300 audit(1765890548.572:746): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcad788640 a2=0 a3=7ffcad78862c items=0 ppid=3636 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:08.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:08.582141 kernel: audit: type=1327 audit(1765890548.572:746): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:08.578000 audit[5379]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:08.585604 kernel: audit: type=1325 audit(1765890548.578:747): table=nat:135 family=2 entries=14 op=nft_register_rule pid=5379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:08.578000 audit[5379]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcad788640 a2=0 a3=0 items=0 ppid=3636 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:08.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:09.918419 systemd[1]: Started sshd@7-172.31.28.98:22-139.178.89.65:56408.service - OpenSSH per-connection server daemon (139.178.89.65:56408). Dec 16 13:09:09.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.98:22-139.178.89.65:56408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:10.210369 ntpd[1932]: Listen normally on 6 vxlan.calico 192.168.44.0:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 6 vxlan.calico 192.168.44.0:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 7 califa3b1a61f1e [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 8 vxlan.calico [fe80::6424:51ff:fe29:b641%5]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 9 cali43554063c7a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 10 calie50f80cfe07 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 11 cali499f427dd94 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 12 cali307707cd4c1 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 13 cali3160487b38d [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 14 cali9d8badd62be [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 13:09:10.212793 ntpd[1932]: 16 Dec 13:09:10 ntpd[1932]: Listen normally on 15 cali84ab746864d [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 13:09:10.210436 ntpd[1932]: Listen normally on 7 califa3b1a61f1e [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 13:09:10.210472 ntpd[1932]: Listen normally on 8 vxlan.calico [fe80::6424:51ff:fe29:b641%5]:123 Dec 16 13:09:10.210500 ntpd[1932]: Listen normally on 9 cali43554063c7a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 13:09:10.210527 ntpd[1932]: 
Listen normally on 10 calie50f80cfe07 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 13:09:10.210555 ntpd[1932]: Listen normally on 11 cali499f427dd94 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 13:09:10.210582 ntpd[1932]: Listen normally on 12 cali307707cd4c1 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 13:09:10.210621 ntpd[1932]: Listen normally on 13 cali3160487b38d [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 13:09:10.210650 ntpd[1932]: Listen normally on 14 cali9d8badd62be [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 13:09:10.210676 ntpd[1932]: Listen normally on 15 cali84ab746864d [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 13:09:10.293000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:10.294661 sshd[5385]: Accepted publickey for core from 139.178.89.65 port 56408 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:10.296000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:10.296000 audit[5385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3d5fe480 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:10.296000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:10.318000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:10.321000 audit[5391]: CRED_ACQ pid=5391 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:10.309716 systemd-logind[1939]: New session 8 of user core. Dec 16 13:09:10.298918 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:10.313412 systemd[1]: Started session-8.scope - Session 8 of User core. 
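[Editor's note, not part of the captured log] The PROCTITLE fields in the audit records above are hex-encoded because the raw /proc/<pid>/cmdline contains NUL separators between arguments. A minimal Python sketch for recovering the command lines, using two hex values copied verbatim from the records above:

```python
# Decode an audit PROCTITLE field: auditd hex-encodes the process title when
# it contains non-printable bytes (the NUL separators from /proc/<pid>/cmdline).
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    # argv entries are NUL-separated; join the non-empty parts with spaces
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# Values taken from the audit records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```

This confirms that the NETFILTER_CFG bursts come from kube-proxy-style `iptables-restore --noflush --counters` runs, matching the xtables-nft-multi exe recorded in the SYSCALL lines.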
Dec 16 13:09:11.292587 sshd[5391]: Connection closed by 139.178.89.65 port 56408 Dec 16 13:09:11.293285 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:11.297000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:11.298000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:11.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.28.98:22-139.178.89.65:56408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:11.301418 systemd[1]: sshd@7-172.31.28.98:22-139.178.89.65:56408.service: Deactivated successfully. Dec 16 13:09:11.303934 systemd-logind[1939]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:09:11.305719 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:09:11.308161 systemd-logind[1939]: Removed session 8. Dec 16 13:09:16.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.98:22-139.178.89.65:42620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:16.326112 systemd[1]: Started sshd@8-172.31.28.98:22-139.178.89.65:42620.service - OpenSSH per-connection server daemon (139.178.89.65:42620). Dec 16 13:09:16.328038 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 13:09:16.328157 kernel: audit: type=1130 audit(1765890556.326:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.98:22-139.178.89.65:42620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:16.525000 audit[5424]: USER_ACCT pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.527005 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:16.527866 sshd[5424]: Accepted publickey for core from 139.178.89.65 port 42620 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:16.535747 kernel: audit: type=1101 audit(1765890556.525:758): pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.535949 kernel: audit: type=1103 audit(1765890556.526:759): pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.526000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.533671 systemd-logind[1939]: New session 9 of user core. Dec 16 13:09:16.526000 audit[5424]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc101bf7c0 a2=3 a3=0 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:16.541121 kernel: audit: type=1006 audit(1765890556.526:760): pid=5424 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 13:09:16.541236 kernel: audit: type=1300 audit(1765890556.526:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc101bf7c0 a2=3 a3=0 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:16.526000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:16.545892 kernel: audit: type=1327 audit(1765890556.526:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:16.546634 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:09:16.551000 audit[5424]: USER_START pid=5424 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.554000 audit[5427]: CRED_ACQ pid=5427 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.559844 kernel: audit: type=1105 audit(1765890556.551:761): pid=5424 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.559971 kernel: audit: type=1103 audit(1765890556.554:762): pid=5427 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.706697 sshd[5427]: Connection closed by 139.178.89.65 port 42620 Dec 16 13:09:16.708963 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:16.711000 audit[5424]: USER_END pid=5424 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.718476 systemd[1]: sshd@8-172.31.28.98:22-139.178.89.65:42620.service: Deactivated successfully. Dec 16 13:09:16.720261 kernel: audit: type=1106 audit(1765890556.711:763): pid=5424 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.720414 kernel: audit: type=1104 audit(1765890556.711:764): pid=5424 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.711000 audit[5424]: CRED_DISP pid=5424 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:16.723415 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:09:16.725753 systemd-logind[1939]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:09:16.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.28.98:22-139.178.89.65:42620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:16.729083 systemd-logind[1939]: Removed session 9. 
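[Editor's note, not part of the captured log] Every pull above fails the same way: containerd's resolver receives a 404 for the `:v3.30.4` tag from ghcr.io. A hedged sketch of the same manifest lookup done directly against the Docker Registry HTTP API v2, assuming the repository allows anonymous pulls and that ghcr.io issues tokens from its standard `/token` endpoint (the repository name is taken from the log; adjust if credentials are required):

```python
# Reproduce the registry lookup containerd is failing: request a pull token,
# then HEAD the manifest for the tag. Expect 200 if the tag resolves,
# 404 as seen in the log above.
import requests  # third-party: pip install requests

def manifest_status(registry: str, repository: str, tag: str) -> int:
    token = requests.get(
        f"https://{registry}/token",
        params={"scope": f"repository:{repository}:pull"},
        timeout=10,
    ).json()["token"]
    resp = requests.head(
        f"https://{registry}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept both OCI index and Docker manifest-list media types
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        timeout=10,
    )
    return resp.status_code

print(manifest_status("ghcr.io", "flatcar/calico/kube-controllers", "v3.30.4"))
```

A 404 here would show the tag simply is not published at that reference, i.e. the failure is on the registry side rather than a node networking or credential problem.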
Dec 16 13:09:17.756947 containerd[1969]: time="2025-12-16T13:09:17.755972175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:09:18.022599 containerd[1969]: time="2025-12-16T13:09:18.022432477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:18.025436 containerd[1969]: time="2025-12-16T13:09:18.025274300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:09:18.025436 containerd[1969]: time="2025-12-16T13:09:18.025321388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:18.025700 kubelet[3309]: E1216 13:09:18.025605 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:18.025700 kubelet[3309]: E1216 13:09:18.025668 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:18.026457 kubelet[3309]: E1216 13:09:18.025849 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:561161844d8542869bf93b20f103b053,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:18.029374 containerd[1969]: time="2025-12-16T13:09:18.029256604Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:09:18.309865 containerd[1969]: time="2025-12-16T13:09:18.309708660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:18.312265 containerd[1969]: time="2025-12-16T13:09:18.312082971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:09:18.312265 containerd[1969]: time="2025-12-16T13:09:18.312181524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:18.312471 kubelet[3309]: E1216 13:09:18.312352 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:18.312471 kubelet[3309]: E1216 13:09:18.312399 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:18.312585 kubelet[3309]: E1216 13:09:18.312523 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:18.314019 kubelet[3309]: E1216 13:09:18.313975 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:09:18.758228 containerd[1969]: time="2025-12-16T13:09:18.757782045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:19.023873 containerd[1969]: time="2025-12-16T13:09:19.023400874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:19.025839 containerd[1969]: time="2025-12-16T13:09:19.025757548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:19.026198 containerd[1969]: time="2025-12-16T13:09:19.025767245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:19.026322 kubelet[3309]: E1216 13:09:19.026265 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:19.027220 kubelet[3309]: E1216 13:09:19.026316 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:19.027220 kubelet[3309]: E1216 13:09:19.026486 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-t947q_calico-apiserver(402c8f91-f505-4b31-ab8d-437df33aba9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:19.028148 kubelet[3309]: E1216 13:09:19.028091 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:09:19.104000 audit[5442]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5442 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:19.104000 audit[5442]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffecad51670 a2=0 a3=7ffecad5165c items=0 ppid=3636 pid=5442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:19.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:19.112000 audit[5442]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=5442 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
13:09:19.112000 audit[5442]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffecad51670 a2=0 a3=7ffecad5165c items=0 ppid=3636 pid=5442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:19.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:19.133000 audit[5444]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:19.133000 audit[5444]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcacd1bd80 a2=0 a3=7ffcacd1bd6c items=0 ppid=3636 pid=5444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:19.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:19.149000 audit[5444]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=5444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:19.149000 audit[5444]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcacd1bd80 a2=0 a3=7ffcacd1bd6c items=0 ppid=3636 pid=5444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:19.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:20.758582 containerd[1969]: time="2025-12-16T13:09:20.758280928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:09:21.040976 containerd[1969]: time="2025-12-16T13:09:21.040686493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:21.043299 containerd[1969]: time="2025-12-16T13:09:21.043156266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:09:21.043299 containerd[1969]: time="2025-12-16T13:09:21.043241849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:21.044732 kubelet[3309]: E1216 13:09:21.044680 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:21.048901 kubelet[3309]: E1216 13:09:21.045304 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:21.048901 kubelet[3309]: E1216 13:09:21.045968 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:21.049158 containerd[1969]: time="2025-12-16T13:09:21.046915765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:21.309957 containerd[1969]: time="2025-12-16T13:09:21.309774121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:21.312386 containerd[1969]: time="2025-12-16T13:09:21.312310721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:21.312636 containerd[1969]: time="2025-12-16T13:09:21.312331925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:21.312706 kubelet[3309]: E1216 13:09:21.312660 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:21.312786 kubelet[3309]: E1216 13:09:21.312727 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:21.313270 kubelet[3309]: E1216 13:09:21.313154 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wps7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wpbz6_calico-system(ea48f51b-a248-4d71-8caa-ed889e7f5fac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:21.314077 containerd[1969]: time="2025-12-16T13:09:21.314026379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:09:21.314720 kubelet[3309]: E1216 13:09:21.314632 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:09:21.570141 containerd[1969]: time="2025-12-16T13:09:21.569964151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:21.572395 containerd[1969]: time="2025-12-16T13:09:21.572247275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:09:21.572395 containerd[1969]: time="2025-12-16T13:09:21.572354753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:21.572758 kubelet[3309]: E1216 13:09:21.572710 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:21.572909 kubelet[3309]: E1216 13:09:21.572853 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:21.573153 kubelet[3309]: E1216 13:09:21.573086 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:21.575111 kubelet[3309]: E1216 13:09:21.574598 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:09:21.750845 systemd[1]: Started sshd@9-172.31.28.98:22-139.178.89.65:54686.service - OpenSSH per-connection server daemon (139.178.89.65:54686). Dec 16 13:09:21.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.98:22-139.178.89.65:54686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:21.752467 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 13:09:21.752505 kernel: audit: type=1130 audit(1765890561.751:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.98:22-139.178.89.65:54686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:21.757387 containerd[1969]: time="2025-12-16T13:09:21.757092093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:21.942000 audit[5450]: USER_ACCT pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.944418 sshd[5450]: Accepted publickey for core from 139.178.89.65 port 54686 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:21.949603 kernel: audit: type=1101 audit(1765890561.942:771): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.950000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.952328 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:21.959397 kernel: audit: type=1103 audit(1765890561.950:772): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.959601 kernel: audit: type=1006 audit(1765890561.950:773): pid=5450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 13:09:21.950000 audit[5450]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf3cebbe0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:21.963452 kernel: audit: type=1300 audit(1765890561.950:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf3cebbe0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:21.969161 kernel: audit: type=1327 audit(1765890561.950:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:21.950000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:21.970172 systemd-logind[1939]: New session 10 of user core. Dec 16 13:09:21.976425 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 13:09:21.984000 audit[5450]: USER_START pid=5450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.995135 kernel: audit: type=1105 audit(1765890561.984:774): pid=5450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.995284 kernel: audit: type=1103 audit(1765890561.988:775): pid=5453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:21.988000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.014313 containerd[1969]: time="2025-12-16T13:09:22.014239962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:22.016848 containerd[1969]: time="2025-12-16T13:09:22.016787888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:22.018254 containerd[1969]: time="2025-12-16T13:09:22.016800089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:22.018582 kubelet[3309]: E1216 13:09:22.017202 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:22.018582 kubelet[3309]: E1216 13:09:22.017277 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:22.018582 kubelet[3309]: E1216 13:09:22.017473 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6h55z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-x9w4j_calico-apiserver(17fc83ee-aaa8-428d-ba14-4fb4545cfe65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:22.019193 kubelet[3309]: E1216 13:09:22.019147 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:09:22.163703 sshd[5453]: Connection closed by 139.178.89.65 port 54686 Dec 16 13:09:22.164500 sshd-session[5450]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:22.176224 kernel: audit: type=1106 audit(1765890562.168:776): pid=5450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.168000 audit[5450]: USER_END pid=5450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.174262 systemd[1]: sshd@9-172.31.28.98:22-139.178.89.65:54686.service: Deactivated successfully. Dec 16 13:09:22.169000 audit[5450]: CRED_DISP pid=5450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.179598 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:09:22.184125 kernel: audit: type=1104 audit(1765890562.169:777): pid=5450 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.182542 systemd-logind[1939]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:09:22.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.28.98:22-139.178.89.65:54686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:22.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.28.98:22-139.178.89.65:54688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:22.201858 systemd[1]: Started sshd@10-172.31.28.98:22-139.178.89.65:54688.service - OpenSSH per-connection server daemon (139.178.89.65:54688). Dec 16 13:09:22.205391 systemd-logind[1939]: Removed session 10. Dec 16 13:09:22.415000 audit[5467]: USER_ACCT pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.416501 sshd[5467]: Accepted publickey for core from 139.178.89.65 port 54688 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:22.417000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.418000 audit[5467]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2d92c310 a2=3 a3=0 items=0 ppid=1 pid=5467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:22.418000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:22.418710 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:22.425832 systemd-logind[1939]: New session 11 of user core. Dec 16 13:09:22.432419 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 13:09:22.437000 audit[5467]: USER_START pid=5467 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.440000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.650087 sshd[5470]: Connection closed by 139.178.89.65 port 54688 Dec 16 13:09:22.653801 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:22.655000 audit[5467]: USER_END pid=5467 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.655000 audit[5467]: CRED_DISP pid=5467 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.662453 systemd[1]: sshd@10-172.31.28.98:22-139.178.89.65:54688.service: Deactivated successfully. Dec 16 13:09:22.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.28.98:22-139.178.89.65:54688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:22.670004 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:09:22.676805 systemd-logind[1939]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:09:22.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.28.98:22-139.178.89.65:54694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:22.697761 systemd[1]: Started sshd@11-172.31.28.98:22-139.178.89.65:54694.service - OpenSSH per-connection server daemon (139.178.89.65:54694). Dec 16 13:09:22.700128 systemd-logind[1939]: Removed session 11. 
Dec 16 13:09:22.756704 containerd[1969]: time="2025-12-16T13:09:22.756430257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:22.894000 audit[5480]: USER_ACCT pid=5480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.894779 sshd[5480]: Accepted publickey for core from 139.178.89.65 port 54694 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:22.896000 audit[5480]: CRED_ACQ pid=5480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.896000 audit[5480]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca4e8e3b0 a2=3 a3=0 items=0 ppid=1 pid=5480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:22.896000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:22.897190 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:22.903989 systemd-logind[1939]: New session 12 of user core. Dec 16 13:09:22.907951 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:09:22.913000 audit[5480]: USER_START pid=5480 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:22.915000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:23.039879 containerd[1969]: time="2025-12-16T13:09:23.039809620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:23.042792 containerd[1969]: time="2025-12-16T13:09:23.042280747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:23.042792 containerd[1969]: time="2025-12-16T13:09:23.042428554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:23.043099 kubelet[3309]: E1216 13:09:23.042939 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:23.043099 kubelet[3309]: E1216 13:09:23.043010 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:23.044813 kubelet[3309]: E1216 13:09:23.044726 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6qrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bcdd655bc-b4pqw_calico-system(eef40561-fc3a-47f4-ab5c-0482b5980a8d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:23.046270 kubelet[3309]: E1216 13:09:23.046226 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:09:23.094737 sshd[5483]: Connection closed by 139.178.89.65 port 54694 Dec 16 13:09:23.096357 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:23.098000 audit[5480]: USER_END pid=5480 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:23.098000 audit[5480]: CRED_DISP pid=5480 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:23.102444 systemd[1]: sshd@11-172.31.28.98:22-139.178.89.65:54694.service: Deactivated successfully. Dec 16 13:09:23.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.28.98:22-139.178.89.65:54694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:23.105361 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:09:23.107892 systemd-logind[1939]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:09:23.110033 systemd-logind[1939]: Removed session 12. Dec 16 13:09:28.134470 systemd[1]: Started sshd@12-172.31.28.98:22-139.178.89.65:54702.service - OpenSSH per-connection server daemon (139.178.89.65:54702). Dec 16 13:09:28.137171 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 13:09:28.137701 kernel: audit: type=1130 audit(1765890568.134:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.98:22-139.178.89.65:54702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:28.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.98:22-139.178.89.65:54702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:28.316000 audit[5505]: USER_ACCT pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.320371 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:28.322746 sshd[5505]: Accepted publickey for core from 139.178.89.65 port 54702 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:28.323084 kernel: audit: type=1101 audit(1765890568.316:798): pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.319000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.330319 kernel: audit: type=1103 audit(1765890568.319:799): pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.334578 systemd-logind[1939]: New session 13 of user core. Dec 16 13:09:28.345205 kernel: audit: type=1006 audit(1765890568.319:800): pid=5505 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 13:09:28.345348 kernel: audit: type=1300 audit(1765890568.319:800): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcab58b4d0 a2=3 a3=0 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:28.319000 audit[5505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcab58b4d0 a2=3 a3=0 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:28.319000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:28.351096 kernel: audit: type=1327 audit(1765890568.319:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:28.350444 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:09:28.366413 kernel: audit: type=1105 audit(1765890568.357:801): pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.357000 audit[5505]: USER_START pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.367000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.373096 kernel: audit: type=1103 audit(1765890568.367:802): pid=5509 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.534881 sshd[5509]: Connection closed by 139.178.89.65 port 54702 Dec 16 13:09:28.535201 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:28.539000 audit[5505]: USER_END pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.546254 kernel: audit: type=1106 audit(1765890568.539:803): pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.546312 systemd[1]: sshd@12-172.31.28.98:22-139.178.89.65:54702.service: Deactivated successfully. Dec 16 13:09:28.539000 audit[5505]: CRED_DISP pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.550582 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:09:28.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.28.98:22-139.178.89.65:54702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:28.552902 systemd-logind[1939]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:09:28.553122 kernel: audit: type=1104 audit(1765890568.539:804): pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:28.556314 systemd-logind[1939]: Removed session 13. 
Dec 16 13:09:31.756692 kubelet[3309]: E1216 13:09:31.756590 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:09:31.757514 kubelet[3309]: E1216 13:09:31.756527 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:09:33.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.98:22-139.178.89.65:49204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:33.580819 systemd[1]: Started sshd@13-172.31.28.98:22-139.178.89.65:49204.service - OpenSSH per-connection server daemon (139.178.89.65:49204). Dec 16 13:09:33.589534 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:09:33.589639 kernel: audit: type=1130 audit(1765890573.580:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.98:22-139.178.89.65:49204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:33.746000 audit[5545]: USER_ACCT pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.747100 sshd[5545]: Accepted publickey for core from 139.178.89.65 port 49204 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:33.751197 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:33.750000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.756316 kernel: audit: type=1101 audit(1765890573.746:807): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.756421 kernel: audit: type=1103 audit(1765890573.750:808): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.761782 kubelet[3309]: E1216 13:09:33.761646 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:09:33.767107 kernel: audit: type=1006 audit(1765890573.750:809): pid=5545 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 13:09:33.750000 audit[5545]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe62e16de0 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:33.780085 kernel: audit: type=1300 audit(1765890573.750:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe62e16de0 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:33.780200 kernel: audit: type=1327 audit(1765890573.750:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 
13:09:33.750000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:33.777375 systemd-logind[1939]: New session 14 of user core. Dec 16 13:09:33.783767 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:09:33.790000 audit[5545]: USER_START pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.800773 kernel: audit: type=1105 audit(1765890573.790:810): pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.800000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.807219 kernel: audit: type=1103 audit(1765890573.800:811): pid=5549 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.946837 sshd[5549]: Connection closed by 139.178.89.65 port 49204 Dec 16 13:09:33.949655 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:33.958000 audit[5545]: USER_END pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.969107 kernel: audit: type=1106 audit(1765890573.958:812): pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.969224 kernel: audit: type=1104 audit(1765890573.960:813): pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.960000 audit[5545]: CRED_DISP pid=5545 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:33.967269 systemd[1]: sshd@13-172.31.28.98:22-139.178.89.65:49204.service: Deactivated successfully. Dec 16 13:09:33.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.28.98:22-139.178.89.65:49204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:33.970816 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:09:33.972549 systemd-logind[1939]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:09:33.974565 systemd-logind[1939]: Removed session 14. Dec 16 13:09:34.759950 kubelet[3309]: E1216 13:09:34.759845 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:09:34.763152 kubelet[3309]: E1216 13:09:34.763051 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:09:36.760837 kubelet[3309]: E1216 13:09:36.757449 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:09:38.994715 systemd[1]: Started sshd@14-172.31.28.98:22-139.178.89.65:49208.service - OpenSSH per-connection server daemon (139.178.89.65:49208). Dec 16 13:09:38.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.98:22-139.178.89.65:49208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:38.999260 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:09:38.999353 kernel: audit: type=1130 audit(1765890578.995:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.98:22-139.178.89.65:49208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:39.329000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.332278 sshd[5563]: Accepted publickey for core from 139.178.89.65 port 49208 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:39.335788 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:39.337250 kernel: audit: type=1101 audit(1765890579.329:816): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.337487 kernel: audit: type=1103 audit(1765890579.332:817): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.332000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.343203 kernel: audit: type=1006 audit(1765890579.332:818): pid=5563 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 13:09:39.332000 audit[5563]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce4be0c20 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:39.346928 systemd-logind[1939]: New session 15 of user core. Dec 16 13:09:39.350266 kernel: audit: type=1300 audit(1765890579.332:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce4be0c20 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:39.332000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:39.353747 kernel: audit: type=1327 audit(1765890579.332:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:39.356461 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 13:09:39.361000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.365000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.370326 kernel: audit: type=1105 audit(1765890579.361:819): pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.370418 kernel: audit: type=1103 audit(1765890579.365:820): pid=5566 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.624000 sshd[5566]: Connection closed by 139.178.89.65 port 49208 Dec 16 13:09:39.628142 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:39.631000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.637374 systemd[1]: sshd@14-172.31.28.98:22-139.178.89.65:49208.service: Deactivated successfully. Dec 16 13:09:39.639140 kernel: audit: type=1106 audit(1765890579.631:821): pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.640197 kernel: audit: type=1104 audit(1765890579.631:822): pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.631000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:39.643271 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:09:39.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.28.98:22-139.178.89.65:49208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:39.646901 systemd-logind[1939]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:09:39.649649 systemd-logind[1939]: Removed session 15. 
Dec 16 13:09:43.759711 containerd[1969]: time="2025-12-16T13:09:43.759656216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:09:44.159127 containerd[1969]: time="2025-12-16T13:09:44.159075572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:44.161353 containerd[1969]: time="2025-12-16T13:09:44.161255294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:09:44.161595 containerd[1969]: time="2025-12-16T13:09:44.161273045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:44.161677 kubelet[3309]: E1216 13:09:44.161570 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:44.161677 kubelet[3309]: E1216 13:09:44.161633 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:09:44.162350 kubelet[3309]: E1216 13:09:44.161852 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wps7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wpbz6_calico-system(ea48f51b-a248-4d71-8caa-ed889e7f5fac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:44.163908 kubelet[3309]: E1216 13:09:44.163129 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:09:44.664551 systemd[1]: Started sshd@15-172.31.28.98:22-139.178.89.65:58604.service - OpenSSH per-connection server daemon (139.178.89.65:58604). Dec 16 13:09:44.676443 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:09:44.676641 kernel: audit: type=1130 audit(1765890584.665:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.98:22-139.178.89.65:58604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:44.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.98:22-139.178.89.65:58604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:44.881000 audit[5580]: USER_ACCT pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.886875 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:44.891537 sshd[5580]: Accepted publickey for core from 139.178.89.65 port 58604 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:44.895611 kernel: audit: type=1101 audit(1765890584.881:825): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.885000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.902826 kernel: audit: type=1103 audit(1765890584.885:826): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.909324 kernel: audit: type=1006 audit(1765890584.885:827): pid=5580 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 13:09:44.909188 systemd-logind[1939]: New session 16 of user core. Dec 16 13:09:44.885000 audit[5580]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7d5fe1d0 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:44.885000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:44.921893 kernel: audit: type=1300 audit(1765890584.885:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7d5fe1d0 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:44.922034 kernel: audit: type=1327 audit(1765890584.885:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:44.925400 systemd[1]: Started session-16.scope - Session 16 of User core. 
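[editor's note] The PullImage failures above all end the same way: ghcr.io answers 404 for the calico v3.30.4 tags, containerd reports NotFound, and kubelet surfaces ErrImagePull. Below is a minimal sketch of reproducing that result against the OCI distribution API directly, assuming ghcr.io hands out anonymous pull tokens from its /token endpoint (as it does for public images); the repository and tag are taken from the goldmane entry above.

import json
import urllib.request

IMAGE = "flatcar/calico/goldmane"   # repository part of the failing reference
TAG = "v3.30.4"                     # tag containerd could not resolve

def manifest_status(image: str, tag: str) -> int:
    # Step 1: anonymous bearer token scoped to pulling this repository.
    token_url = ("https://ghcr.io/token?service=ghcr.io"
                 f"&scope=repository:{image}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # Step 2: HEAD the tag's manifest; 404 here matches the "not found" in the log.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{image}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    print(IMAGE, TAG, "->", manifest_status(IMAGE, TAG))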
Dec 16 13:09:44.929000 audit[5580]: USER_START pid=5580 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.933000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.938614 kernel: audit: type=1105 audit(1765890584.929:828): pid=5580 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:44.938883 kernel: audit: type=1103 audit(1765890584.933:829): pid=5583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.121381 sshd[5583]: Connection closed by 139.178.89.65 port 58604 Dec 16 13:09:45.123617 sshd-session[5580]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:45.125000 audit[5580]: USER_END pid=5580 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.134309 systemd[1]: sshd@15-172.31.28.98:22-139.178.89.65:58604.service: Deactivated successfully. Dec 16 13:09:45.136086 kernel: audit: type=1106 audit(1765890585.125:830): pid=5580 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.142447 kernel: audit: type=1104 audit(1765890585.125:831): pid=5580 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.125000 audit[5580]: CRED_DISP pid=5580 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.140904 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:09:45.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.28.98:22-139.178.89.65:58604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.143309 systemd-logind[1939]: Session 16 logged out. Waiting for processes to exit. 
Dec 16 13:09:45.164374 systemd[1]: Started sshd@16-172.31.28.98:22-139.178.89.65:58612.service - OpenSSH per-connection server daemon (139.178.89.65:58612). Dec 16 13:09:45.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.28.98:22-139.178.89.65:58612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:45.166789 systemd-logind[1939]: Removed session 16. Dec 16 13:09:45.349000 audit[5595]: USER_ACCT pid=5595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.350346 sshd[5595]: Accepted publickey for core from 139.178.89.65 port 58612 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:45.351000 audit[5595]: CRED_ACQ pid=5595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.351000 audit[5595]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcff7ae090 a2=3 a3=0 items=0 ppid=1 pid=5595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:45.351000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:45.352373 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:45.359150 systemd-logind[1939]: New session 17 of user core. Dec 16 13:09:45.371647 systemd[1]: Started session-17.scope - Session 17 of User core. 
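[editor's note] The audit PROCTITLE values above are the process command line, hex-encoded with NUL bytes separating the argv entries. A small decoder, applied to the two values that appear verbatim in this log:

def decode_proctitle(hex_value: str) -> str:
    # Audit logs hex-encode the proctitle; argv entries are NUL-separated.
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# sshd session records above:
print(decode_proctitle(
    "737368642D73657373696F6E3A20636F7265205B707269765D"
))  # -> sshd-session: core [priv]

# iptables-restore records further down in this log:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))  # -> iptables-restore -w 5 -W 100000 --noflush --counters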
Dec 16 13:09:45.376000 audit[5595]: USER_START pid=5595 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.378000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:45.758911 containerd[1969]: time="2025-12-16T13:09:45.758610286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:46.070169 containerd[1969]: time="2025-12-16T13:09:46.070015342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:46.074750 containerd[1969]: time="2025-12-16T13:09:46.073824075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:46.074750 containerd[1969]: time="2025-12-16T13:09:46.073969859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:46.075870 kubelet[3309]: E1216 13:09:46.075213 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:46.083876 kubelet[3309]: E1216 13:09:46.080598 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:46.083876 kubelet[3309]: E1216 13:09:46.080815 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-t947q_calico-apiserver(402c8f91-f505-4b31-ab8d-437df33aba9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:46.083876 kubelet[3309]: E1216 13:09:46.082166 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:09:46.551701 sshd[5598]: Connection closed by 139.178.89.65 port 58612 Dec 16 13:09:46.553310 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:46.555000 audit[5595]: USER_END pid=5595 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:46.556000 audit[5595]: CRED_DISP pid=5595 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:46.559950 systemd-logind[1939]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:09:46.560479 systemd[1]: sshd@16-172.31.28.98:22-139.178.89.65:58612.service: Deactivated successfully. Dec 16 13:09:46.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.28.98:22-139.178.89.65:58612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:46.563966 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:09:46.566576 systemd-logind[1939]: Removed session 17. Dec 16 13:09:46.587339 systemd[1]: Started sshd@17-172.31.28.98:22-139.178.89.65:58614.service - OpenSSH per-connection server daemon (139.178.89.65:58614). Dec 16 13:09:46.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.98:22-139.178.89.65:58614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:46.769000 audit[5614]: USER_ACCT pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:46.771890 sshd[5614]: Accepted publickey for core from 139.178.89.65 port 58614 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:46.775631 containerd[1969]: time="2025-12-16T13:09:46.775536724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:09:46.775000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:46.775000 audit[5614]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5aac60f0 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:46.775000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:46.777704 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:46.788764 systemd-logind[1939]: New session 18 of user core. Dec 16 13:09:46.799075 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:09:46.806000 audit[5614]: USER_START pid=5614 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:46.813000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:47.070314 containerd[1969]: time="2025-12-16T13:09:47.070004080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:47.072635 containerd[1969]: time="2025-12-16T13:09:47.072543378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:09:47.072635 containerd[1969]: time="2025-12-16T13:09:47.072593592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:47.072975 kubelet[3309]: E1216 13:09:47.072901 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:47.073134 kubelet[3309]: E1216 13:09:47.072984 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:09:47.073406 kubelet[3309]: E1216 13:09:47.073195 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:47.077001 containerd[1969]: time="2025-12-16T13:09:47.076909804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:09:47.380488 containerd[1969]: time="2025-12-16T13:09:47.380288431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:47.382717 containerd[1969]: time="2025-12-16T13:09:47.382542632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:09:47.382717 containerd[1969]: time="2025-12-16T13:09:47.382679865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:47.383787 kubelet[3309]: E1216 13:09:47.383193 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:47.384971 kubelet[3309]: E1216 13:09:47.384426 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:09:47.385313 kubelet[3309]: E1216 13:09:47.385106 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:47.386786 kubelet[3309]: E1216 13:09:47.386690 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:09:47.721000 audit[5629]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:47.721000 audit[5629]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc312f78a0 a2=0 a3=7ffc312f788c items=0 ppid=3636 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:47.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:47.725000 audit[5629]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:47.725000 audit[5629]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc312f78a0 a2=0 a3=0 items=0 ppid=3636 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:47.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:47.749000 audit[5631]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:47.749000 audit[5631]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcd1ee3c50 a2=0 a3=7ffcd1ee3c3c items=0 ppid=3636 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:47.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:47.752238 sshd[5619]: Connection closed by 139.178.89.65 port 58614 Dec 16 13:09:47.752718 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:47.759000 audit[5614]: USER_END pid=5614 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:47.759000 audit[5614]: CRED_DISP pid=5614 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:47.763071 containerd[1969]: time="2025-12-16T13:09:47.763010041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:09:47.769023 systemd[1]: sshd@17-172.31.28.98:22-139.178.89.65:58614.service: Deactivated successfully. Dec 16 13:09:47.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.28.98:22-139.178.89.65:58614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:09:47.756000 audit[5631]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:47.756000 audit[5631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcd1ee3c50 a2=0 a3=0 items=0 ppid=3636 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:47.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:47.784826 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:09:47.790354 systemd-logind[1939]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:09:47.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.98:22-139.178.89.65:58620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:47.828541 systemd[1]: Started sshd@18-172.31.28.98:22-139.178.89.65:58620.service - OpenSSH per-connection server daemon (139.178.89.65:58620). Dec 16 13:09:47.835284 systemd-logind[1939]: Removed session 18. Dec 16 13:09:48.086000 audit[5636]: USER_ACCT pid=5636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:48.086770 sshd[5636]: Accepted publickey for core from 139.178.89.65 port 58620 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:48.088335 containerd[1969]: time="2025-12-16T13:09:48.088136127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:48.088000 audit[5636]: CRED_ACQ pid=5636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:48.088000 audit[5636]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd500a0e0 a2=3 a3=0 items=0 ppid=1 pid=5636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:48.088000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:48.089684 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:48.090512 containerd[1969]: time="2025-12-16T13:09:48.090375472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:09:48.090726 containerd[1969]: time="2025-12-16T13:09:48.090554048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:48.091262 kubelet[3309]: E1216 13:09:48.091140 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:48.091262 kubelet[3309]: E1216 13:09:48.091232 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:09:48.092202 kubelet[3309]: E1216 13:09:48.091796 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:561161844d8542869bf93b20f103b053,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:48.097019 containerd[1969]: time="2025-12-16T13:09:48.096950444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:09:48.106180 systemd-logind[1939]: New session 19 of user core. Dec 16 13:09:48.110707 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:09:48.118000 audit[5636]: USER_START pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:48.122000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:48.358320 containerd[1969]: time="2025-12-16T13:09:48.358178771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:48.360329 containerd[1969]: time="2025-12-16T13:09:48.360262472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:09:48.360481 containerd[1969]: time="2025-12-16T13:09:48.360384682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:48.360835 kubelet[3309]: E1216 13:09:48.360782 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:48.360962 kubelet[3309]: E1216 13:09:48.360838 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:09:48.361241 kubelet[3309]: E1216 13:09:48.361187 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:48.363313 kubelet[3309]: E1216 13:09:48.363243 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:09:48.759678 containerd[1969]: time="2025-12-16T13:09:48.759473126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:09:48.823317 sshd[5639]: Connection closed by 139.178.89.65 port 58620 Dec 16 13:09:48.826023 sshd-session[5636]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:48.836000 audit[5636]: USER_END pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:48.836000 audit[5636]: CRED_DISP pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:48.842949 systemd[1]: sshd@18-172.31.28.98:22-139.178.89.65:58620.service: Deactivated successfully. Dec 16 13:09:48.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.28.98:22-139.178.89.65:58620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:48.851868 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:09:48.855858 systemd-logind[1939]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:09:48.874921 systemd[1]: Started sshd@19-172.31.28.98:22-139.178.89.65:58636.service - OpenSSH per-connection server daemon (139.178.89.65:58636). Dec 16 13:09:48.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.98:22-139.178.89.65:58636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:48.878142 systemd-logind[1939]: Removed session 19. Dec 16 13:09:49.040826 containerd[1969]: time="2025-12-16T13:09:49.040567090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:49.043191 containerd[1969]: time="2025-12-16T13:09:49.042920605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:09:49.043342 containerd[1969]: time="2025-12-16T13:09:49.042971104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:49.044635 kubelet[3309]: E1216 13:09:49.044291 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:49.044635 kubelet[3309]: E1216 13:09:49.044472 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:09:49.045165 kubelet[3309]: E1216 13:09:49.044790 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6qrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bcdd655bc-b4pqw_calico-system(eef40561-fc3a-47f4-ab5c-0482b5980a8d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:49.046644 kubelet[3309]: E1216 13:09:49.046457 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:09:49.047117 containerd[1969]: time="2025-12-16T13:09:49.046971473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:09:49.065000 audit[5649]: USER_ACCT pid=5649 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:49.065798 sshd[5649]: Accepted publickey for core from 139.178.89.65 port 58636 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:49.068208 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:49.067000 audit[5649]: CRED_ACQ pid=5649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:49.067000 audit[5649]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ff6bed0 a2=3 a3=0 items=0 ppid=1 pid=5649 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:49.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:49.078127 systemd-logind[1939]: New session 20 of user core. Dec 16 13:09:49.083533 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:09:49.088000 audit[5649]: USER_START pid=5649 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:49.091000 audit[5652]: CRED_ACQ pid=5652 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:49.270790 sshd[5652]: Connection closed by 139.178.89.65 port 58636 Dec 16 13:09:49.271556 sshd-session[5649]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:49.273000 audit[5649]: USER_END pid=5649 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:49.273000 audit[5649]: CRED_DISP pid=5649 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:49.275615 systemd[1]: sshd@19-172.31.28.98:22-139.178.89.65:58636.service: Deactivated successfully. Dec 16 13:09:49.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.28.98:22-139.178.89.65:58636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:49.278451 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:09:49.282628 systemd-logind[1939]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:09:49.284005 systemd-logind[1939]: Removed session 20. 
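[editor's note] The "Accepted publickey ... SHA256:KHLv..." entries use OpenSSH's fingerprint convention: SHA-256 over the raw public key blob, base64-encoded with the trailing padding stripped. A small sketch of computing the same fingerprint from an authorized_keys line; the file path is an illustrative assumption, not taken from this log.

import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    # A public key line looks like: "ssh-ed25519 AAAA... comment"
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/home/core/.ssh/authorized_keys") as fh:   # hypothetical path
    for line in fh:
        if line.strip() and not line.startswith("#"):
            print(openssh_sha256_fingerprint(line))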
Dec 16 13:09:49.325952 containerd[1969]: time="2025-12-16T13:09:49.325894400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:09:49.328237 containerd[1969]: time="2025-12-16T13:09:49.328179141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:09:49.328423 containerd[1969]: time="2025-12-16T13:09:49.328297725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:09:49.328594 kubelet[3309]: E1216 13:09:49.328546 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:49.328671 kubelet[3309]: E1216 13:09:49.328608 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:09:49.354334 kubelet[3309]: E1216 13:09:49.328808 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6h55z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6d7fb6ffdb-x9w4j_calico-apiserver(17fc83ee-aaa8-428d-ba14-4fb4545cfe65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:09:49.354334 kubelet[3309]: E1216 13:09:49.330102 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:09:51.424380 update_engine[1942]: I20251216 13:09:51.424295 1942 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 13:09:51.424380 update_engine[1942]: I20251216 13:09:51.424368 1942 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 13:09:51.426845 update_engine[1942]: I20251216 13:09:51.426792 1942 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 13:09:51.427340 update_engine[1942]: I20251216 13:09:51.427304 1942 omaha_request_params.cc:62] Current group set to beta Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.427807 1942 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.427835 1942 update_attempter.cc:643] Scheduling an action processor start. Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.427862 1942 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.427938 1942 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.428018 1942 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.428030 1942 omaha_request_action.cc:272] Request: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: Dec 16 13:09:51.428045 update_engine[1942]: I20251216 13:09:51.428037 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:09:51.470365 update_engine[1942]: I20251216 13:09:51.467820 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:09:51.470365 update_engine[1942]: I20251216 13:09:51.468776 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
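[editor's note] The calico-apiserver container spec dumped above declares an HTTPS readiness probe, GET /readyz on port 5443. A minimal sketch of issuing the same request by hand; the pod IP is a placeholder assumption, and certificate verification is disabled only for illustration because the serving certificate is issued for the in-cluster service name.

import ssl
import urllib.request

POD_IP = "10.0.0.10"        # hypothetical; substitute the actual pod IP
URL = f"https://{POD_IP}:5443/readyz"

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE     # illustration only; do not do this in production

try:
    with urllib.request.urlopen(URL, context=ctx, timeout=5) as resp:
        print(resp.status, resp.read().decode(errors="replace"))
except OSError as err:
    print("probe failed:", err)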
Dec 16 13:09:51.477115 locksmithd[2019]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 13:09:51.494917 update_engine[1942]: E20251216 13:09:51.494832 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:09:51.495106 update_engine[1942]: I20251216 13:09:51.494971 1942 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 13:09:53.962000 audit[5666]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:53.964590 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 13:09:53.964637 kernel: audit: type=1325 audit(1765890593.962:873): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:53.962000 audit[5666]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd0237c40 a2=0 a3=7fffd0237c2c items=0 ppid=3636 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:53.973662 kernel: audit: type=1300 audit(1765890593.962:873): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd0237c40 a2=0 a3=7fffd0237c2c items=0 ppid=3636 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:53.974116 kernel: audit: type=1327 audit(1765890593.962:873): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:53.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:53.972000 audit[5666]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:53.981193 kernel: audit: type=1325 audit(1765890593.972:874): table=nat:145 family=2 entries=104 op=nft_register_chain pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:09:53.972000 audit[5666]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffd0237c40 a2=0 a3=7fffd0237c2c items=0 ppid=3636 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:53.990259 kernel: audit: type=1300 audit(1765890593.972:874): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffd0237c40 a2=0 a3=7fffd0237c2c items=0 ppid=3636 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:53.993639 kernel: audit: type=1327 audit(1765890593.972:874): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:53.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:09:54.307744 systemd[1]: Started 
sshd@20-172.31.28.98:22-139.178.89.65:51990.service - OpenSSH per-connection server daemon (139.178.89.65:51990). Dec 16 13:09:54.315225 kernel: audit: type=1130 audit(1765890594.307:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.98:22-139.178.89.65:51990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:54.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.98:22-139.178.89.65:51990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:54.480085 kernel: audit: type=1101 audit(1765890594.473:876): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.473000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.477636 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:54.480740 sshd[5668]: Accepted publickey for core from 139.178.89.65 port 51990 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:54.476000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.485627 systemd-logind[1939]: New session 21 of user core. Dec 16 13:09:54.490053 kernel: audit: type=1103 audit(1765890594.476:877): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.490211 kernel: audit: type=1006 audit(1765890594.477:878): pid=5668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 13:09:54.477000 audit[5668]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe959b5250 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:54.477000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:09:54.493530 systemd[1]: Started session-21.scope - Session 21 of User core. 
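The PROCTITLE field in the NETFILTER_CFG audit records a little earlier is just the audited process's argv, hex-encoded with NUL bytes between arguments. A short sketch to decode it, using nothing beyond the standard library:

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hexstr: str) -> list[str]:
    return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

# The value logged with the NETFILTER_CFG events above decodes to:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The -w 5 -W 100000 pair is iptables' xtables lock wait and wait-interval, which fits these rules being written through /usr/sbin/xtables-nft-multi (the exe= field in the same records) by a Kubernetes networking component.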
Dec 16 13:09:54.498000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.500000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.688664 sshd[5671]: Connection closed by 139.178.89.65 port 51990 Dec 16 13:09:54.715134 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:54.718000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.720000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:54.722877 systemd[1]: sshd@20-172.31.28.98:22-139.178.89.65:51990.service: Deactivated successfully. Dec 16 13:09:54.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.28.98:22-139.178.89.65:51990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:54.727115 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:09:54.730185 systemd-logind[1939]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:09:54.732583 systemd-logind[1939]: Removed session 21. Dec 16 13:09:56.758760 kubelet[3309]: E1216 13:09:56.758289 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:09:57.757289 kubelet[3309]: E1216 13:09:57.757117 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:09:59.723019 systemd[1]: Started sshd@21-172.31.28.98:22-139.178.89.65:51992.service - OpenSSH per-connection server daemon (139.178.89.65:51992). 
Dec 16 13:09:59.725307 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:09:59.725372 kernel: audit: type=1130 audit(1765890599.723:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.98:22-139.178.89.65:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:59.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.98:22-139.178.89.65:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:09:59.761356 kubelet[3309]: E1216 13:09:59.761194 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:09:59.979000 audit[5709]: USER_ACCT pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:59.980033 sshd[5709]: Accepted publickey for core from 139.178.89.65 port 51992 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:09:59.981000 audit[5709]: CRED_ACQ pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:59.986486 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:59.987899 kernel: audit: type=1101 audit(1765890599.979:885): pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:59.987990 kernel: audit: type=1103 audit(1765890599.981:886): pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:09:59.993134 kernel: audit: type=1006 audit(1765890599.981:887): pid=5709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 13:09:59.981000 audit[5709]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5be0970 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:59.997260 kernel: audit: type=1300 audit(1765890599.981:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5be0970 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:09:59.981000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:00.013226 kernel: audit: type=1327 audit(1765890599.981:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:00.023905 systemd-logind[1939]: New session 22 of user core. Dec 16 13:10:00.045894 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:10:00.059000 audit[5709]: USER_START pid=5709 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.072273 kernel: audit: type=1105 audit(1765890600.059:888): pid=5709 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.072410 kernel: audit: type=1103 audit(1765890600.062:889): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.062000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.658614 sshd[5712]: Connection closed by 139.178.89.65 port 51992 Dec 16 13:10:00.660454 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:00.663000 audit[5709]: USER_END pid=5709 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.670417 kernel: audit: type=1106 audit(1765890600.663:890): pid=5709 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.670581 kernel: audit: type=1104 audit(1765890600.663:891): pid=5709 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.663000 audit[5709]: CRED_DISP pid=5709 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:00.676257 systemd[1]: sshd@21-172.31.28.98:22-139.178.89.65:51992.service: Deactivated successfully. Dec 16 13:10:00.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.28.98:22-139.178.89.65:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:00.679935 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:10:00.683128 systemd-logind[1939]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:10:00.689353 systemd-logind[1939]: Removed session 22. Dec 16 13:10:00.772562 kubelet[3309]: E1216 13:10:00.772345 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:10:01.340382 update_engine[1942]: I20251216 13:10:01.339157 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:10:01.340382 update_engine[1942]: I20251216 13:10:01.343513 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:10:01.351628 update_engine[1942]: I20251216 13:10:01.351117 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 13:10:01.380147 update_engine[1942]: E20251216 13:10:01.380071 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:10:01.380329 update_engine[1942]: I20251216 13:10:01.380234 1942 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 13:10:01.758170 kubelet[3309]: E1216 13:10:01.757979 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:10:02.780615 kubelet[3309]: E1216 13:10:02.780541 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:10:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.98:22-139.178.89.65:57690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:05.712378 systemd[1]: Started sshd@22-172.31.28.98:22-139.178.89.65:57690.service - OpenSSH per-connection server daemon (139.178.89.65:57690). Dec 16 13:10:05.714580 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:10:05.714643 kernel: audit: type=1130 audit(1765890605.712:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.98:22-139.178.89.65:57690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:10:06.007000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.020710 sshd[5725]: Accepted publickey for core from 139.178.89.65 port 57690 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:10:06.021399 kernel: audit: type=1101 audit(1765890606.007:894): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.045024 kernel: audit: type=1103 audit(1765890606.023:895): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.023000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.027087 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:06.065996 kernel: audit: type=1006 audit(1765890606.023:896): pid=5725 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 13:10:06.066167 kernel: audit: type=1300 audit(1765890606.023:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf4bcc1b0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:06.023000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf4bcc1b0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:06.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:06.078898 kernel: audit: type=1327 audit(1765890606.023:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:06.095312 systemd-logind[1939]: New session 23 of user core. Dec 16 13:10:06.102772 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 13:10:06.114000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.138127 kernel: audit: type=1105 audit(1765890606.114:897): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.138000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.154158 kernel: audit: type=1103 audit(1765890606.138:898): pid=5728 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.914036 sshd[5728]: Connection closed by 139.178.89.65 port 57690 Dec 16 13:10:06.916330 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:06.919000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.929121 kernel: audit: type=1106 audit(1765890606.919:899): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.930800 systemd[1]: sshd@22-172.31.28.98:22-139.178.89.65:57690.service: Deactivated successfully. Dec 16 13:10:06.919000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.941107 kernel: audit: type=1104 audit(1765890606.919:900): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:06.942764 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:10:06.945283 systemd-logind[1939]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:10:06.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.28.98:22-139.178.89.65:57690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:06.952045 systemd-logind[1939]: Removed session 23. 
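Sessions 21 through 23 above all follow the same shape: a socket-activated sshd@... unit starts, PAM opens a session for user core, and the session is closed again within a second. A small sketch, assuming one journal record per line in the format shown here, that pairs the audit USER_START and USER_END records by session id to measure those lifetimes:

# Pair audit USER_START / USER_END records as rendered in this journal, e.g.
# "Dec 16 13:09:54.498000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=21 ..."
# The timestamp carries no year, so durations only make sense within one capture.
import re
from datetime import datetime

RECORD = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: "
    r"(?P<type>USER_START|USER_END) pid=\d+.* ses=(?P<ses>\d+)"
)

def session_lifetimes(lines):
    opened, lifetimes = {}, {}
    for line in lines:
        m = RECORD.match(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
        if m["type"] == "USER_START":
            opened[m["ses"]] = ts
        elif m["ses"] in opened:
            lifetimes[m["ses"]] = (ts - opened.pop(m["ses"])).total_seconds()
    return lifetimes

For the records above this yields lifetimes well under a second for sessions 21, 22 and 23, which looks like automated rather than interactive use of the core account.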
Dec 16 13:10:07.755719 kubelet[3309]: E1216 13:10:07.755659 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:10:09.757151 kubelet[3309]: E1216 13:10:09.756824 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:10:11.336664 update_engine[1942]: I20251216 13:10:11.336571 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:10:11.337207 update_engine[1942]: I20251216 13:10:11.336702 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:10:11.337999 update_engine[1942]: I20251216 13:10:11.337950 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:10:11.339460 update_engine[1942]: E20251216 13:10:11.339174 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:10:11.339460 update_engine[1942]: I20251216 13:10:11.339304 1942 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 13:10:11.950748 systemd[1]: Started sshd@23-172.31.28.98:22-139.178.89.65:48732.service - OpenSSH per-connection server daemon (139.178.89.65:48732). Dec 16 13:10:11.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.98:22-139.178.89.65:48732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:11.953507 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:10:11.953618 kernel: audit: type=1130 audit(1765890611.950:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.98:22-139.178.89.65:48732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:10:12.226000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.227343 sshd[5740]: Accepted publickey for core from 139.178.89.65 port 48732 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:10:12.230923 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:12.233098 kernel: audit: type=1101 audit(1765890612.226:903): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.228000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.244096 kernel: audit: type=1103 audit(1765890612.228:904): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.250095 kernel: audit: type=1006 audit(1765890612.228:905): pid=5740 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 13:10:12.252942 kernel: audit: type=1300 audit(1765890612.228:905): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd834cfc0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:12.228000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd834cfc0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:12.250526 systemd-logind[1939]: New session 24 of user core. Dec 16 13:10:12.228000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:12.263156 kernel: audit: type=1327 audit(1765890612.228:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:12.265674 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 13:10:12.272000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.280832 kernel: audit: type=1105 audit(1765890612.272:906): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.280000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.288115 kernel: audit: type=1103 audit(1765890612.280:907): pid=5743 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.763901 kubelet[3309]: E1216 13:10:12.763833 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:10:12.898985 sshd[5743]: Connection closed by 139.178.89.65 port 48732 Dec 16 13:10:12.901313 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:12.904000 audit[5740]: USER_END pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.910458 systemd[1]: sshd@23-172.31.28.98:22-139.178.89.65:48732.service: Deactivated successfully. 
Dec 16 13:10:12.912285 kernel: audit: type=1106 audit(1765890612.904:908): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.918192 kernel: audit: type=1104 audit(1765890612.905:909): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.905000 audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:12.915815 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:10:12.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.28.98:22-139.178.89.65:48732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:12.920482 systemd-logind[1939]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:10:12.924139 systemd-logind[1939]: Removed session 24. Dec 16 13:10:13.757202 kubelet[3309]: E1216 13:10:13.757139 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:10:14.759769 kubelet[3309]: E1216 13:10:14.759593 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:10:16.758864 kubelet[3309]: E1216 13:10:16.758794 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:10:17.943837 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:10:17.943974 kernel: audit: type=1130 audit(1765890617.936:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.98:22-139.178.89.65:48734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:17.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.98:22-139.178.89.65:48734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:17.936258 systemd[1]: Started sshd@24-172.31.28.98:22-139.178.89.65:48734.service - OpenSSH per-connection server daemon (139.178.89.65:48734). Dec 16 13:10:18.143104 kernel: audit: type=1101 audit(1765890618.136:912): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.136000 audit[5755]: USER_ACCT pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.139242 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:10:18.143792 sshd[5755]: Accepted publickey for core from 139.178.89.65 port 48734 ssh2: RSA SHA256:KHLvalz0pEUwMHEW+CYnePnCR/HY9aPnYIRYzgcsWEk Dec 16 13:10:18.138000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.152340 kernel: audit: type=1103 audit(1765890618.138:913): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.154150 systemd-logind[1939]: New session 25 of user core. Dec 16 13:10:18.160099 kernel: audit: type=1006 audit(1765890618.138:914): pid=5755 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 13:10:18.138000 audit[5755]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1852480 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:18.167091 kernel: audit: type=1300 audit(1765890618.138:914): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1852480 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:18.169392 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 13:10:18.138000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:18.173094 kernel: audit: type=1327 audit(1765890618.138:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:10:18.180000 audit[5755]: USER_START pid=5755 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.187083 kernel: audit: type=1105 audit(1765890618.180:915): pid=5755 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.183000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.194112 kernel: audit: type=1103 audit(1765890618.183:916): pid=5758 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.413051 sshd[5758]: Connection closed by 139.178.89.65 port 48734 Dec 16 13:10:18.416851 sshd-session[5755]: pam_unix(sshd:session): session closed for user core Dec 16 13:10:18.428000 audit[5755]: USER_END pid=5755 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.433439 systemd[1]: sshd@24-172.31.28.98:22-139.178.89.65:48734.service: Deactivated successfully. Dec 16 13:10:18.436387 kernel: audit: type=1106 audit(1765890618.428:917): pid=5755 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.438525 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:10:18.428000 audit[5755]: CRED_DISP pid=5755 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.441670 systemd-logind[1939]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:10:18.444342 systemd-logind[1939]: Removed session 25. 
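The ImagePullBackOff messages repeating through this stretch are kubelet's per-image back-off, not a fresh registry hit on every pod sync: after each failed pull the delay roughly doubles from a short base up to a cap of a few minutes. The exact numbers below (10 s initial, 300 s cap) are the commonly cited upstream kubelet defaults and are stated here as an assumption, since the log itself only shows the resulting Back-off messages.

# Sketch of the assumed per-image pull back-off schedule (10s base, doubling,
# capped at 300s). Purely illustrative; not kubelet code.
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay, schedule = initial, []
    for _ in range(attempts):
        schedule.append(delay)
        delay = min(delay * 2, cap)
    return schedule

print(backoff_schedule())
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]

That cadence matches what the journal shows: kubelet re-reports the cached Back-off error on every pod sync (the recurring "Error syncing pod" lines), and containerd only logs a new PullImage attempt once the current delay has expired.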
Dec 16 13:10:18.447832 kernel: audit: type=1104 audit(1765890618.428:918): pid=5755 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 13:10:18.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.28.98:22-139.178.89.65:48734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:10:18.757533 kubelet[3309]: E1216 13:10:18.756401 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:10:21.338220 update_engine[1942]: I20251216 13:10:21.338128 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:10:21.338765 update_engine[1942]: I20251216 13:10:21.338259 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:10:21.338765 update_engine[1942]: I20251216 13:10:21.338734 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:10:21.340933 update_engine[1942]: E20251216 13:10:21.339982 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340111 1942 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340126 1942 omaha_request_action.cc:617] Omaha request response: Dec 16 13:10:21.340933 update_engine[1942]: E20251216 13:10:21.340232 1942 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340265 1942 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340273 1942 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340281 1942 update_attempter.cc:306] Processing Done. Dec 16 13:10:21.340933 update_engine[1942]: E20251216 13:10:21.340305 1942 update_attempter.cc:619] Update failed. Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340317 1942 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340325 1942 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340332 1942 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340435 1942 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340476 1942 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:10:21.340933 update_engine[1942]: I20251216 13:10:21.340483 1942 omaha_request_action.cc:272] Request: Dec 16 13:10:21.340933 update_engine[1942]: Dec 16 13:10:21.340933 update_engine[1942]: Dec 16 13:10:21.342259 locksmithd[2019]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 13:10:21.342616 update_engine[1942]: Dec 16 13:10:21.342616 update_engine[1942]: Dec 16 13:10:21.342616 update_engine[1942]: Dec 16 13:10:21.342616 update_engine[1942]: Dec 16 13:10:21.342616 update_engine[1942]: I20251216 13:10:21.340492 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:10:21.342616 update_engine[1942]: I20251216 13:10:21.340520 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:10:21.342616 update_engine[1942]: I20251216 13:10:21.340889 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:10:21.343539 update_engine[1942]: E20251216 13:10:21.342868 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.342957 1942 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.342969 1942 omaha_request_action.cc:617] Omaha request response: Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.342978 1942 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.342985 1942 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.342991 1942 update_attempter.cc:306] Processing Done. Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.342999 1942 update_attempter.cc:310] Error event sent. 
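The update_engine burst above is one complete failed update check: the Omaha request is posted to the literal host "disabled" (presumably a deliberately invalid SERVER value used to neuter update checks), curl cannot resolve that name, the fetcher gives up after its retries, and error 2000 (kActionCodeOmahaErrorInHTTPResponse) is mapped to payload error 37 before the attempter goes idle; the "Next update check in 43m18s" line just below closes the cycle. A small sketch, grounded only in the line formats visible in this capture, that tallies these outcomes from a saved journal:

# Summarize update_engine activity from journal lines like the ones in this
# capture ("Posting an Omaha request to ...", "No HTTP response, retry N",
# "Omaha request network transfer failed.", "Next update check in 43m18s").
import re
from collections import Counter

PATTERNS = {
    "omaha_posts":    re.compile(r"Posting an Omaha request to (\S+)"),
    "fetch_retries":  re.compile(r"No HTTP response, retry (\d+)"),
    "transfer_fails": re.compile(r"Omaha request network transfer failed"),
    "next_check":     re.compile(r"Next update check in (\S+)"),
}

def summarize(lines):
    counts = Counter()
    targets, next_checks = set(), []
    for line in lines:
        if m := PATTERNS["omaha_posts"].search(line):
            counts["omaha_posts"] += 1
            targets.add(m.group(1))
        if PATTERNS["fetch_retries"].search(line):
            counts["fetch_retries"] += 1
        if PATTERNS["transfer_fails"].search(line):
            counts["transfer_fails"] += 1
        if m := PATTERNS["next_check"].search(line):
            next_checks.append(m.group(1))
    # For this capture: targets == {"disabled"}, several retries, and the
    # only scheduled next check is "43m18s" away.
    return counts, targets, next_checks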
Dec 16 13:10:21.343539 update_engine[1942]: I20251216 13:10:21.343012 1942 update_check_scheduler.cc:74] Next update check in 43m18s Dec 16 13:10:21.344681 locksmithd[2019]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 13:10:24.756890 kubelet[3309]: E1216 13:10:24.756406 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:10:24.758650 kubelet[3309]: E1216 13:10:24.758582 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:10:25.756887 kubelet[3309]: E1216 13:10:25.756831 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78" Dec 16 13:10:28.758964 kubelet[3309]: E1216 13:10:28.758273 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:10:29.756112 containerd[1969]: time="2025-12-16T13:10:29.755733886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:10:30.051732 containerd[1969]: 
time="2025-12-16T13:10:30.051575442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:30.054729 containerd[1969]: time="2025-12-16T13:10:30.054646276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:10:30.054729 containerd[1969]: time="2025-12-16T13:10:30.054662861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:30.055190 kubelet[3309]: E1216 13:10:30.055136 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:10:30.055901 kubelet[3309]: E1216 13:10:30.055203 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:10:30.055901 kubelet[3309]: E1216 13:10:30.055786 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6qrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7bcdd655bc-b4pqw_calico-system(eef40561-fc3a-47f4-ab5c-0482b5980a8d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:30.057283 kubelet[3309]: E1216 13:10:30.057214 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d" Dec 16 13:10:32.755131 containerd[1969]: time="2025-12-16T13:10:32.755054522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:10:33.082739 containerd[1969]: time="2025-12-16T13:10:33.082676491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:33.085681 containerd[1969]: time="2025-12-16T13:10:33.085600929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:10:33.086456 containerd[1969]: time="2025-12-16T13:10:33.085741939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:33.086530 kubelet[3309]: E1216 13:10:33.086031 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:10:33.086530 kubelet[3309]: E1216 13:10:33.086126 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:10:33.086530 kubelet[3309]: E1216 13:10:33.086310 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wps7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wpbz6_calico-system(ea48f51b-a248-4d71-8caa-ed889e7f5fac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:33.087628 kubelet[3309]: E1216 13:10:33.087584 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac" Dec 16 13:10:33.292965 systemd[1]: cri-containerd-34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab.scope: Deactivated successfully. 
Dec 16 13:10:33.293362 systemd[1]: cri-containerd-34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab.scope: Consumed 12.453s CPU time, 105.6M memory peak, 45M read from disk. Dec 16 13:10:33.298000 audit: BPF prog-id=153 op=UNLOAD Dec 16 13:10:33.300369 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:10:33.300487 kernel: audit: type=1334 audit(1765890633.298:920): prog-id=153 op=UNLOAD Dec 16 13:10:33.298000 audit: BPF prog-id=157 op=UNLOAD Dec 16 13:10:33.305245 kernel: audit: type=1334 audit(1765890633.298:921): prog-id=157 op=UNLOAD Dec 16 13:10:33.402246 containerd[1969]: time="2025-12-16T13:10:33.402083746Z" level=info msg="received container exit event container_id:\"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\" id:\"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\" pid:3683 exit_status:1 exited_at:{seconds:1765890633 nanos:314228398}" Dec 16 13:10:33.526607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab-rootfs.mount: Deactivated successfully. Dec 16 13:10:33.992166 kubelet[3309]: E1216 13:10:33.992038 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 13:10:34.129767 systemd[1]: cri-containerd-7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84.scope: Deactivated successfully. Dec 16 13:10:34.132330 systemd[1]: cri-containerd-7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84.scope: Consumed 5.740s CPU time, 111.8M memory peak, 96.9M read from disk. Dec 16 13:10:34.134000 audit: BPF prog-id=113 op=UNLOAD Dec 16 13:10:34.137086 kernel: audit: type=1334 audit(1765890634.134:922): prog-id=113 op=UNLOAD Dec 16 13:10:34.134000 audit: BPF prog-id=119 op=UNLOAD Dec 16 13:10:34.138881 containerd[1969]: time="2025-12-16T13:10:34.138327423Z" level=info msg="received container exit event container_id:\"7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84\" id:\"7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84\" pid:3145 exit_status:1 exited_at:{seconds:1765890634 nanos:132014528}" Dec 16 13:10:34.139261 kernel: audit: type=1334 audit(1765890634.134:923): prog-id=119 op=UNLOAD Dec 16 13:10:34.134000 audit: BPF prog-id=263 op=LOAD Dec 16 13:10:34.143084 kernel: audit: type=1334 audit(1765890634.134:924): prog-id=263 op=LOAD Dec 16 13:10:34.143181 kernel: audit: type=1334 audit(1765890634.135:925): prog-id=98 op=UNLOAD Dec 16 13:10:34.135000 audit: BPF prog-id=98 op=UNLOAD Dec 16 13:10:34.173888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84-rootfs.mount: Deactivated successfully. 
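The recurring failure earlier in this span is an image-resolution loop: containerd asks ghcr.io for the flatcar/calico images at tag v3.30.4, the registry answers 404, containerd reports the gRPC NotFound code over CRI, and the kubelet records ErrImagePull and then ImagePullBackOff. The following is a minimal Go sketch, not taken from this system, of how a containerd 1.x client pull of one of these references surfaces that same NotFound condition; the socket path and the k8s.io namespace are assumptions based on the defaults the CRI plugin uses.

```go
// Minimal sketch: pull one of the references the kubelet keeps requesting and
// check for the NotFound condition containerd maps to the gRPC NotFound code.
// Assumes the default containerd socket and the "k8s.io" CRI namespace.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the tags the log shows failing with "404 Not Found".
	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		if errdefs.IsNotFound(err) {
			// This is the condition the kubelet then reports as ErrImagePull
			// and, on retry, as ImagePullBackOff.
			fmt.Printf("image %s could not be resolved: %v\n", ref, err)
			return
		}
		fmt.Printf("pull failed for a different reason: %v\n", err)
		return
	}
	fmt.Println("pulled", ref)
}
```

Against a registry that actually serves the tag, the same call would return an image handle and the back-off entries above would stop.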
Dec 16 13:10:34.416502 kubelet[3309]: I1216 13:10:34.416443 3309 scope.go:117] "RemoveContainer" containerID="7eec761e253c7a9a6543c49a0937eb40f70d759ba0b3100a8861a5371f8dfd84" Dec 16 13:10:34.417854 kubelet[3309]: I1216 13:10:34.416886 3309 scope.go:117] "RemoveContainer" containerID="34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab" Dec 16 13:10:34.433526 containerd[1969]: time="2025-12-16T13:10:34.433449126Z" level=info msg="CreateContainer within sandbox \"c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 13:10:34.437277 containerd[1969]: time="2025-12-16T13:10:34.437017976Z" level=info msg="CreateContainer within sandbox \"303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 13:10:34.550906 containerd[1969]: time="2025-12-16T13:10:34.550847734Z" level=info msg="Container 4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:34.568893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761926402.mount: Deactivated successfully. Dec 16 13:10:34.575774 containerd[1969]: time="2025-12-16T13:10:34.574826542Z" level=info msg="Container 1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:34.576537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671369298.mount: Deactivated successfully. Dec 16 13:10:34.660899 containerd[1969]: time="2025-12-16T13:10:34.660842608Z" level=info msg="CreateContainer within sandbox \"303b963e9f3a9454df4c233abd6e61ce073c66a319a3d021d2947670f2aae156\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b\"" Dec 16 13:10:34.661691 containerd[1969]: time="2025-12-16T13:10:34.661657169Z" level=info msg="StartContainer for \"4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b\"" Dec 16 13:10:34.662964 containerd[1969]: time="2025-12-16T13:10:34.662928666Z" level=info msg="connecting to shim 4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b" address="unix:///run/containerd/s/3ffc3b25b520b353944ec4494e1e1dc1078f08281c0bd0f328649cef92868711" protocol=ttrpc version=3 Dec 16 13:10:34.676959 containerd[1969]: time="2025-12-16T13:10:34.676821168Z" level=info msg="CreateContainer within sandbox \"c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688\"" Dec 16 13:10:34.678168 containerd[1969]: time="2025-12-16T13:10:34.677735275Z" level=info msg="StartContainer for \"1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688\"" Dec 16 13:10:34.679815 containerd[1969]: time="2025-12-16T13:10:34.679762438Z" level=info msg="connecting to shim 1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688" address="unix:///run/containerd/s/1a12de6008f564d8f5007f887466ff78bc4ef8abe82954ed7d95e775aca38b05" protocol=ttrpc version=3 Dec 16 13:10:34.728481 systemd[1]: Started cri-containerd-1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688.scope - libcontainer container 1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688. 
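The CreateContainer and StartContainer entries above are the kubelet driving containerd through the CRI after the previous kube-controller-manager and tigera-operator containers exited. Below is a minimal Go sketch of those two calls, not taken from this system: the sandbox ID and container name are the ones logged above, while the image reference is a placeholder (the log does not record it), and the gRPC target assumes the containerd CRI socket.

```go
// Minimal sketch of the two CRI calls the kubelet logs as
// "CreateContainer within sandbox ..." and "StartContainer ...".
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Create the new attempt inside the existing pod sandbox, then start it.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		// Sandbox ID as logged above; image ref is a placeholder.
		PodSandboxId: "c1a9e38de2a9788a11f7d2af36536b8b38648736793982f56d5d89a5ec006699",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-controller-manager", Attempt: 1},
			Image:    &runtimeapi.ImageSpec{Image: "<image-ref>"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{},
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		panic(err)
	}
}
```

The "connecting to shim ... protocol=ttrpc" entries are containerd's side of the same operation, wiring the new container to its runtime shim before StartContainer returns.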
Dec 16 13:10:34.739889 systemd[1]: Started cri-containerd-4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b.scope - libcontainer container 4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b. Dec 16 13:10:34.784000 audit: BPF prog-id=264 op=LOAD Dec 16 13:10:34.789846 kernel: audit: type=1334 audit(1765890634.784:926): prog-id=264 op=LOAD Dec 16 13:10:34.789980 kernel: audit: type=1334 audit(1765890634.787:927): prog-id=265 op=LOAD Dec 16 13:10:34.787000 audit: BPF prog-id=265 op=LOAD Dec 16 13:10:34.787000 audit: BPF prog-id=266 op=LOAD Dec 16 13:10:34.793718 kernel: audit: type=1334 audit(1765890634.787:928): prog-id=266 op=LOAD Dec 16 13:10:34.787000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.800094 kernel: audit: type=1300 audit(1765890634.787:928): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.788000 audit: BPF prog-id=266 op=UNLOAD Dec 16 13:10:34.788000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.788000 audit: BPF prog-id=267 op=LOAD Dec 16 13:10:34.788000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.788000 audit: BPF prog-id=268 op=LOAD Dec 16 13:10:34.788000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.788000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.788000 audit: BPF prog-id=268 op=UNLOAD Dec 16 13:10:34.788000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.788000 audit: BPF prog-id=267 op=UNLOAD Dec 16 13:10:34.788000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.788000 audit: BPF prog-id=269 op=LOAD Dec 16 13:10:34.788000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3594 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386637343665323261376330363237343936383130323933623031 Dec 16 13:10:34.790000 audit: BPF prog-id=270 op=LOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.790000 audit: BPF prog-id=270 op=UNLOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.790000 audit: BPF prog-id=271 op=LOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.790000 audit: BPF prog-id=272 op=LOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.790000 audit: BPF prog-id=272 op=UNLOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.790000 audit: BPF prog-id=271 op=UNLOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.790000 audit: BPF prog-id=273 op=LOAD Dec 16 13:10:34.790000 audit[5838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2979 pid=5838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:34.790000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162333534366133643934373939663938613936396437373638643538 Dec 16 13:10:34.854852 containerd[1969]: time="2025-12-16T13:10:34.854372908Z" level=info msg="StartContainer for \"4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b\" returns successfully" Dec 16 13:10:34.884137 containerd[1969]: time="2025-12-16T13:10:34.884088694Z" level=info msg="StartContainer for \"1b3546a3d94799f98a969d7768d5888345b56e0cfe53947b38267f3dfa4b1688\" returns successfully" Dec 16 13:10:37.756572 containerd[1969]: time="2025-12-16T13:10:37.756513946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:10:38.060360 containerd[1969]: time="2025-12-16T13:10:38.060207697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:38.062581 containerd[1969]: time="2025-12-16T13:10:38.062436913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:10:38.062581 containerd[1969]: time="2025-12-16T13:10:38.062506154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:38.062960 kubelet[3309]: E1216 13:10:38.062914 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:10:38.063627 kubelet[3309]: E1216 13:10:38.062974 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:10:38.063627 kubelet[3309]: E1216 13:10:38.063145 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:38.072274 containerd[1969]: time="2025-12-16T13:10:38.072216601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:10:38.374944 containerd[1969]: time="2025-12-16T13:10:38.374877187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:38.377391 containerd[1969]: time="2025-12-16T13:10:38.377302965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:10:38.377589 containerd[1969]: time="2025-12-16T13:10:38.377450111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:38.377729 kubelet[3309]: E1216 13:10:38.377683 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:10:38.378433 kubelet[3309]: E1216 13:10:38.377747 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:10:38.378433 kubelet[3309]: E1216 13:10:38.377910 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7qnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h272q_calico-system(c808a4b9-6eee-4490-92c6-5f208009c5e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:38.379540 kubelet[3309]: E1216 13:10:38.379481 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7" Dec 16 13:10:39.186887 systemd[1]: cri-containerd-a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1.scope: Deactivated successfully. 
Dec 16 13:10:39.187471 systemd[1]: cri-containerd-a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1.scope: Consumed 2.827s CPU time, 41.5M memory peak, 37.5M read from disk. Dec 16 13:10:39.190148 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 16 13:10:39.190782 kernel: audit: type=1334 audit(1765890639.188:942): prog-id=274 op=LOAD Dec 16 13:10:39.188000 audit: BPF prog-id=274 op=LOAD Dec 16 13:10:39.193194 kernel: audit: type=1334 audit(1765890639.191:943): prog-id=90 op=UNLOAD Dec 16 13:10:39.191000 audit: BPF prog-id=90 op=UNLOAD Dec 16 13:10:39.193000 audit: BPF prog-id=105 op=UNLOAD Dec 16 13:10:39.195660 containerd[1969]: time="2025-12-16T13:10:39.193982052Z" level=info msg="received container exit event container_id:\"a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1\" id:\"a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1\" pid:3108 exit_status:1 exited_at:{seconds:1765890639 nanos:193307063}" Dec 16 13:10:39.196158 kernel: audit: type=1334 audit(1765890639.193:944): prog-id=105 op=UNLOAD Dec 16 13:10:39.193000 audit: BPF prog-id=109 op=UNLOAD Dec 16 13:10:39.198088 kernel: audit: type=1334 audit(1765890639.193:945): prog-id=109 op=UNLOAD Dec 16 13:10:39.230258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1-rootfs.mount: Deactivated successfully. Dec 16 13:10:39.462386 kubelet[3309]: I1216 13:10:39.460634 3309 scope.go:117] "RemoveContainer" containerID="a9336930e8964e5fc3f1507907df79056c3b5a5fb11ab34df92a37fe0b237de1" Dec 16 13:10:39.466133 containerd[1969]: time="2025-12-16T13:10:39.466091781Z" level=info msg="CreateContainer within sandbox \"f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 13:10:39.560819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215886839.mount: Deactivated successfully. Dec 16 13:10:39.564302 containerd[1969]: time="2025-12-16T13:10:39.561283899Z" level=info msg="Container 7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:39.585831 containerd[1969]: time="2025-12-16T13:10:39.585776021Z" level=info msg="CreateContainer within sandbox \"f7ee51aeafe865e2d5c2f57188737520951d3d42eb95ed1e3218d87868bc2b81\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b\"" Dec 16 13:10:39.587155 containerd[1969]: time="2025-12-16T13:10:39.587013301Z" level=info msg="StartContainer for \"7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b\"" Dec 16 13:10:39.589138 containerd[1969]: time="2025-12-16T13:10:39.589094390Z" level=info msg="connecting to shim 7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b" address="unix:///run/containerd/s/f43ced74c4374acacc8fc97eca0c299a7390c4b6591e026df475f41f1a44ce5c" protocol=ttrpc version=3 Dec 16 13:10:39.653608 systemd[1]: Started cri-containerd-7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b.scope - libcontainer container 7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b. 
Dec 16 13:10:39.688828 kernel: audit: type=1334 audit(1765890639.682:946): prog-id=275 op=LOAD Dec 16 13:10:39.689017 kernel: audit: type=1334 audit(1765890639.684:947): prog-id=276 op=LOAD Dec 16 13:10:39.689103 kernel: audit: type=1300 audit(1765890639.684:947): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.682000 audit: BPF prog-id=275 op=LOAD Dec 16 13:10:39.684000 audit: BPF prog-id=276 op=LOAD Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.701582 kernel: audit: type=1327 audit(1765890639.684:947): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.703558 kernel: audit: type=1334 audit(1765890639.684:948): prog-id=276 op=UNLOAD Dec 16 13:10:39.684000 audit: BPF prog-id=276 op=UNLOAD Dec 16 13:10:39.709301 kernel: audit: type=1300 audit(1765890639.684:948): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.684000 audit: BPF prog-id=277 op=LOAD Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.684000 audit: BPF prog-id=278 op=LOAD Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.684000 audit: BPF prog-id=278 op=UNLOAD Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.684000 audit: BPF prog-id=277 op=UNLOAD Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.684000 audit: BPF prog-id=279 op=LOAD Dec 16 13:10:39.684000 audit[5914]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2978 pid=5914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:10:39.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765373461353866313161313739323732313364386466366237376235 Dec 16 13:10:39.757499 containerd[1969]: time="2025-12-16T13:10:39.757375844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:10:39.774894 containerd[1969]: time="2025-12-16T13:10:39.774821949Z" level=info msg="StartContainer for \"7e74a58f11a17927213d8df6b77b5bcffd18c98867b5db05e760e33b4bba365b\" returns successfully" Dec 16 13:10:40.060098 containerd[1969]: time="2025-12-16T13:10:40.059912434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:40.062336 containerd[1969]: time="2025-12-16T13:10:40.062230902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:10:40.062336 containerd[1969]: time="2025-12-16T13:10:40.062278660Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:40.062943 kubelet[3309]: E1216 13:10:40.062646 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:10:40.062943 kubelet[3309]: E1216 13:10:40.062714 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:10:40.063109 kubelet[3309]: E1216 13:10:40.063016 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6h55z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-x9w4j_calico-apiserver(17fc83ee-aaa8-428d-ba14-4fb4545cfe65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:40.063648 containerd[1969]: time="2025-12-16T13:10:40.063554826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:10:40.064897 kubelet[3309]: E1216 13:10:40.064844 3309 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65" Dec 16 13:10:40.337092 containerd[1969]: time="2025-12-16T13:10:40.336907019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:40.340113 containerd[1969]: time="2025-12-16T13:10:40.340004618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:40.340465 containerd[1969]: time="2025-12-16T13:10:40.340292551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:10:40.341079 kubelet[3309]: E1216 13:10:40.340813 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:10:40.341079 kubelet[3309]: E1216 13:10:40.340874 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:10:40.341378 kubelet[3309]: E1216 13:10:40.341054 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7fb6ffdb-t947q_calico-apiserver(402c8f91-f505-4b31-ab8d-437df33aba9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:10:40.342596 kubelet[3309]: E1216 13:10:40.342548 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f" Dec 16 13:10:40.761094 containerd[1969]: time="2025-12-16T13:10:40.759307265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:10:41.053722 containerd[1969]: time="2025-12-16T13:10:41.053328632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:10:41.055893 containerd[1969]: time="2025-12-16T13:10:41.055845437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:10:41.056179 containerd[1969]: time="2025-12-16T13:10:41.056071082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:10:41.056675 kubelet[3309]: E1216 13:10:41.056613 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:10:41.057201 kubelet[3309]: E1216 13:10:41.057175 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:10:41.057565 kubelet[3309]: E1216 13:10:41.057521 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:561161844d8542869bf93b20f103b053,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:10:41.060504 containerd[1969]: time="2025-12-16T13:10:41.060442641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:10:41.329979 containerd[1969]: time="2025-12-16T13:10:41.329756768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:10:41.332144 containerd[1969]: time="2025-12-16T13:10:41.332082590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 16 13:10:41.332329 containerd[1969]: time="2025-12-16T13:10:41.332091539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:10:41.334607 kubelet[3309]: E1216 13:10:41.334299 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:10:41.334607 kubelet[3309]: E1216 13:10:41.334369 3309 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:10:41.334607 kubelet[3309]: E1216 13:10:41.334531 3309 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v44l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58f99f576c-h7p64_calico-system(f4a8c05f-aa26-454c-a381-75bd59548a78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:10:41.335789 kubelet[3309]: E1216 13:10:41.335732 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58f99f576c-h7p64" podUID="f4a8c05f-aa26-454c-a381-75bd59548a78"
Dec 16 13:10:43.757299 kubelet[3309]: E1216 13:10:43.757116 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bcdd655bc-b4pqw" podUID="eef40561-fc3a-47f4-ab5c-0482b5980a8d"
Dec 16 13:10:43.761493 kubelet[3309]: E1216 13:10:43.761441 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wpbz6" podUID="ea48f51b-a248-4d71-8caa-ed889e7f5fac"
Dec 16 13:10:43.999664 kubelet[3309]: E1216 13:10:43.999435 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 13:10:46.590151 systemd[1]: cri-containerd-4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b.scope: Deactivated successfully.
Dec 16 13:10:46.590911 systemd[1]: cri-containerd-4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b.scope: Consumed 377ms CPU time, 67.4M memory peak, 31.1M read from disk.
Dec 16 13:10:46.594000 audit: BPF prog-id=264 op=UNLOAD
Dec 16 13:10:46.594997 containerd[1969]: time="2025-12-16T13:10:46.594517720Z" level=info msg="received container exit event container_id:\"4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b\" id:\"4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b\" pid:5869 exit_status:1 exited_at:{seconds:1765890646 nanos:592330390}"
Dec 16 13:10:46.595701 kernel: kauditd_printk_skb: 16 callbacks suppressed
Dec 16 13:10:46.595767 kernel: audit: type=1334 audit(1765890646.594:954): prog-id=264 op=UNLOAD
Dec 16 13:10:46.594000 audit: BPF prog-id=269 op=UNLOAD
Dec 16 13:10:46.599134 kernel: audit: type=1334 audit(1765890646.594:955): prog-id=269 op=UNLOAD
Dec 16 13:10:46.628674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b-rootfs.mount: Deactivated successfully.
Dec 16 13:10:47.489873 kubelet[3309]: I1216 13:10:47.489833 3309 scope.go:117] "RemoveContainer" containerID="34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab"
Dec 16 13:10:47.490505 kubelet[3309]: I1216 13:10:47.490367 3309 scope.go:117] "RemoveContainer" containerID="4a8f746e22a7c0627496810293b01287dac8b40663438e4079fb81833bba138b"
Dec 16 13:10:47.510496 kubelet[3309]: E1216 13:10:47.510288 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-mtmv7_tigera-operator(7a4792f6-b125-4976-aefd-49f96ccab0c9)\"" pod="tigera-operator/tigera-operator-7dcd859c48-mtmv7" podUID="7a4792f6-b125-4976-aefd-49f96ccab0c9"
Dec 16 13:10:47.611510 containerd[1969]: time="2025-12-16T13:10:47.611448966Z" level=info msg="RemoveContainer for \"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\""
Dec 16 13:10:47.646166 containerd[1969]: time="2025-12-16T13:10:47.646110180Z" level=info msg="RemoveContainer for \"34c183f686846855558f02b4e4c1917c88cf0d46bbe335097b745799edc963ab\" returns successfully"
Dec 16 13:10:51.757825 kubelet[3309]: E1216 13:10:51.757564 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-t947q" podUID="402c8f91-f505-4b31-ab8d-437df33aba9f"
Dec 16 13:10:53.755336 kubelet[3309]: E1216 13:10:53.755074 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7fb6ffdb-x9w4j" podUID="17fc83ee-aaa8-428d-ba14-4fb4545cfe65"
Dec 16 13:10:53.755780 kubelet[3309]: E1216 13:10:53.755638 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h272q" podUID="c808a4b9-6eee-4490-92c6-5f208009c5e7"
Dec 16 13:10:54.015996 kubelet[3309]: E1216 13:10:54.015555 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-98?timeout=10s\": context deadline exceeded"