Jan 14 00:56:50.449379 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:15:29 -00 2026
Jan 14 00:56:50.449415 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:56:50.449434 kernel: BIOS-provided physical RAM map:
Jan 14 00:56:50.449465 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 00:56:50.449476 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 14 00:56:50.449486 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 14 00:56:50.449499 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 14 00:56:50.449509 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 14 00:56:50.449516 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 14 00:56:50.449523 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 14 00:56:50.449533 kernel: NX (Execute Disable) protection: active
Jan 14 00:56:50.449540 kernel: APIC: Static calls initialized
Jan 14 00:56:50.449547 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jan 14 00:56:50.449555 kernel: extended physical RAM map:
Jan 14 00:56:50.449564 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 00:56:50.449574 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jan 14 00:56:50.449582 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jan 14 00:56:50.449590 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jan 14 00:56:50.449598 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 14 00:56:50.449605 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 14 00:56:50.449613 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 14 00:56:50.449621 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 14 00:56:50.449629 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 14 00:56:50.449637 kernel: efi: EFI v2.7 by EDK II
Jan 14 00:56:50.449645 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 14 00:56:50.449655 kernel: secureboot: Secure boot disabled
Jan 14 00:56:50.449663 kernel: SMBIOS 2.7 present.
Jan 14 00:56:50.449670 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 14 00:56:50.449678 kernel: DMI: Memory slots populated: 1/1
Jan 14 00:56:50.449686 kernel: Hypervisor detected: KVM
Jan 14 00:56:50.449694 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 14 00:56:50.449702 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 00:56:50.449710 kernel: kvm-clock: using sched offset of 6574947385 cycles
Jan 14 00:56:50.449719 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 00:56:50.449727 kernel: tsc: Detected 2499.998 MHz processor
Jan 14 00:56:50.449738 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 00:56:50.449746 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 00:56:50.449754 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 14 00:56:50.449763 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 00:56:50.449771 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 00:56:50.449783 kernel: Using GB pages for direct mapping
Jan 14 00:56:50.449794 kernel: ACPI: Early table checksum verification disabled
Jan 14 00:56:50.449802 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 14 00:56:50.449811 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 14 00:56:50.449820 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 14 00:56:50.449828 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 14 00:56:50.449837 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 14 00:56:50.449883 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 14 00:56:50.449896 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 14 00:56:50.449909 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 14 00:56:50.451501 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 14 00:56:50.451519 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 14 00:56:50.451533 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 14 00:56:50.451547 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 14 00:56:50.451565 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 14 00:56:50.451579 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 14 00:56:50.451592 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 14 00:56:50.453907 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 14 00:56:50.453929 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 14 00:56:50.453945 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 14 00:56:50.453960 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 14 00:56:50.453981 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 14 00:56:50.453996 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 14 00:56:50.454011 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 14 00:56:50.454026 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 14 00:56:50.454041 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 14 00:56:50.454056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 14 00:56:50.454072 kernel: NUMA: Initialized distance table, cnt=1
Jan 14 00:56:50.454090 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Jan 14 00:56:50.454105 kernel: Zone ranges:
Jan 14 00:56:50.454119 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 00:56:50.454134 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 14 00:56:50.454149 kernel: Normal empty
Jan 14 00:56:50.454164 kernel: Device empty
Jan 14 00:56:50.454179 kernel: Movable zone start for each node
Jan 14 00:56:50.454193 kernel: Early memory node ranges
Jan 14 00:56:50.454212 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 00:56:50.454227 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 14 00:56:50.454242 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 14 00:56:50.454257 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 14 00:56:50.454272 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 00:56:50.454288 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 00:56:50.454303 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 14 00:56:50.454318 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 14 00:56:50.454336 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 14 00:56:50.454351 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 00:56:50.454365 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 14 00:56:50.454378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 00:56:50.454391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 00:56:50.454430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 00:56:50.454445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 00:56:50.454462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 00:56:50.454476 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 00:56:50.454489 kernel: TSC deadline timer available
Jan 14 00:56:50.454503 kernel: CPU topo: Max. logical packages: 1
Jan 14 00:56:50.454517 kernel: CPU topo: Max. logical dies: 1
Jan 14 00:56:50.454531 kernel: CPU topo: Max. dies per package: 1
Jan 14 00:56:50.454544 kernel: CPU topo: Max. threads per core: 2
Jan 14 00:56:50.454560 kernel: CPU topo: Num. cores per package: 1
Jan 14 00:56:50.454574 kernel: CPU topo: Num. threads per package: 2
Jan 14 00:56:50.454588 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 14 00:56:50.454601 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 00:56:50.454615 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 14 00:56:50.454629 kernel: Booting paravirtualized kernel on KVM
Jan 14 00:56:50.454643 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 00:56:50.454657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 00:56:50.454674 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 14 00:56:50.454688 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 14 00:56:50.454701 kernel: pcpu-alloc: [0] 0 1
Jan 14 00:56:50.454714 kernel: kvm-guest: PV spinlocks enabled
Jan 14 00:56:50.454728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 00:56:50.454744 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:56:50.454761 kernel: random: crng init done
Jan 14 00:56:50.454774 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 00:56:50.454788 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 00:56:50.454802 kernel: Fallback order for Node 0: 0
Jan 14 00:56:50.454816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jan 14 00:56:50.454830 kernel: Policy zone: DMA32
Jan 14 00:56:50.454872 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 00:56:50.454887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 00:56:50.454901 kernel: Kernel/User page tables isolation: enabled
Jan 14 00:56:50.454916 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 14 00:56:50.454933 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 00:56:50.454947 kernel: Dynamic Preempt: voluntary
Jan 14 00:56:50.454961 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 00:56:50.454976 kernel: rcu: RCU event tracing is enabled.
Jan 14 00:56:50.454991 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 00:56:50.455008 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 00:56:50.455022 kernel: Rude variant of Tasks RCU enabled.
Jan 14 00:56:50.455037 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 00:56:50.455051 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 00:56:50.455065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 00:56:50.455078 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 00:56:50.455096 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 00:56:50.455110 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 00:56:50.455125 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 14 00:56:50.455139 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 00:56:50.455153 kernel: Console: colour dummy device 80x25
Jan 14 00:56:50.455167 kernel: printk: legacy console [tty0] enabled
Jan 14 00:56:50.455181 kernel: printk: legacy console [ttyS0] enabled
Jan 14 00:56:50.455198 kernel: ACPI: Core revision 20240827
Jan 14 00:56:50.455212 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 14 00:56:50.455227 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 00:56:50.455241 kernel: x2apic enabled
Jan 14 00:56:50.455256 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 00:56:50.455270 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 14 00:56:50.455284 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 14 00:56:50.455301 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 00:56:50.455316 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 14 00:56:50.455330 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 00:56:50.455343 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 00:56:50.455356 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 00:56:50.455370 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 00:56:50.455384 kernel: RETBleed: Vulnerable
Jan 14 00:56:50.455398 kernel: Speculative Store Bypass: Vulnerable
Jan 14 00:56:50.455411 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 00:56:50.455425 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 00:56:50.455441 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 00:56:50.455454 kernel: active return thunk: its_return_thunk
Jan 14 00:56:50.455467 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 14 00:56:50.455481 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 00:56:50.455495 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 00:56:50.455509 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 00:56:50.455523 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 14 00:56:50.455536 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 14 00:56:50.455551 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 00:56:50.455564 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 00:56:50.455581 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 00:56:50.455594 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 14 00:56:50.455608 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 00:56:50.455622 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 14 00:56:50.455636 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 14 00:56:50.455650 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 14 00:56:50.455664 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 14 00:56:50.455678 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 14 00:56:50.455691 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 14 00:56:50.455705 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 14 00:56:50.455719 kernel: Freeing SMP alternatives memory: 32K
Jan 14 00:56:50.455735 kernel: pid_max: default: 32768 minimum: 301
Jan 14 00:56:50.455748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 00:56:50.455762 kernel: landlock: Up and running.
Jan 14 00:56:50.455776 kernel: SELinux: Initializing.
Jan 14 00:56:50.455790 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 14 00:56:50.455804 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 14 00:56:50.455818 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 00:56:50.455831 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 00:56:50.457932 kernel: signal: max sigframe size: 3632
Jan 14 00:56:50.457952 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 00:56:50.457973 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 00:56:50.457989 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 00:56:50.458004 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 00:56:50.458020 kernel: smp: Bringing up secondary CPUs ...
Jan 14 00:56:50.458035 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 00:56:50.458051 kernel: .... node #0, CPUs: #1
Jan 14 00:56:50.458067 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 14 00:56:50.458086 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 00:56:50.458101 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 00:56:50.458115 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 14 00:56:50.458130 kernel: Memory: 1924436K/2037804K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 108804K reserved, 0K cma-reserved)
Jan 14 00:56:50.458145 kernel: devtmpfs: initialized
Jan 14 00:56:50.458160 kernel: x86/mm: Memory block size: 128MB
Jan 14 00:56:50.458179 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 14 00:56:50.458194 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 00:56:50.458208 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 00:56:50.458224 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 00:56:50.458238 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 00:56:50.458254 kernel: audit: initializing netlink subsys (disabled)
Jan 14 00:56:50.458270 kernel: audit: type=2000 audit(1768352207.314:1): state=initialized audit_enabled=0 res=1
Jan 14 00:56:50.458290 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 00:56:50.458304 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 00:56:50.458320 kernel: cpuidle: using governor menu
Jan 14 00:56:50.458336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 00:56:50.458353 kernel: dca service started, version 1.12.1
Jan 14 00:56:50.458367 kernel: PCI: Using configuration type 1 for base access
Jan 14 00:56:50.458382 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 00:56:50.458399 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 00:56:50.458421 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 00:56:50.458436 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 00:56:50.458451 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 00:56:50.458467 kernel: ACPI: Added _OSI(Module Device)
Jan 14 00:56:50.458488 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 00:56:50.458505 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 00:56:50.458522 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 14 00:56:50.458536 kernel: ACPI: Interpreter enabled
Jan 14 00:56:50.458549 kernel: ACPI: PM: (supports S0 S5)
Jan 14 00:56:50.458564 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 00:56:50.458579 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 00:56:50.458594 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 00:56:50.458636 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 14 00:56:50.458651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 00:56:50.458927 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 00:56:50.459140 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 14 00:56:50.459343 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 14 00:56:50.459365 kernel: acpiphp: Slot [3] registered
Jan 14 00:56:50.459383 kernel: acpiphp: Slot [4] registered
Jan 14 00:56:50.459405 kernel: acpiphp: Slot [5] registered
Jan 14 00:56:50.459423 kernel: acpiphp: Slot [6] registered
Jan 14 00:56:50.459441 kernel: acpiphp: Slot [7] registered
Jan 14 00:56:50.459457 kernel: acpiphp: Slot [8] registered
Jan 14 00:56:50.459474 kernel: acpiphp: Slot [9] registered
Jan 14 00:56:50.459490 kernel: acpiphp: Slot [10] registered
Jan 14 00:56:50.459507 kernel: acpiphp: Slot [11] registered
Jan 14 00:56:50.459524 kernel: acpiphp: Slot [12] registered
Jan 14 00:56:50.459545 kernel: acpiphp: Slot [13] registered
Jan 14 00:56:50.459562 kernel: acpiphp: Slot [14] registered
Jan 14 00:56:50.459578 kernel: acpiphp: Slot [15] registered
Jan 14 00:56:50.459595 kernel: acpiphp: Slot [16] registered
Jan 14 00:56:50.459612 kernel: acpiphp: Slot [17] registered
Jan 14 00:56:50.459628 kernel: acpiphp: Slot [18] registered
Jan 14 00:56:50.459646 kernel: acpiphp: Slot [19] registered
Jan 14 00:56:50.459665 kernel: acpiphp: Slot [20] registered
Jan 14 00:56:50.459682 kernel: acpiphp: Slot [21] registered
Jan 14 00:56:50.459699 kernel: acpiphp: Slot [22] registered
Jan 14 00:56:50.459715 kernel: acpiphp: Slot [23] registered
Jan 14 00:56:50.459732 kernel: acpiphp: Slot [24] registered
Jan 14 00:56:50.459749 kernel: acpiphp: Slot [25] registered
Jan 14 00:56:50.459764 kernel: acpiphp: Slot [26] registered
Jan 14 00:56:50.459777 kernel: acpiphp: Slot [27] registered
Jan 14 00:56:50.459798 kernel: acpiphp: Slot [28] registered
Jan 14 00:56:50.459815 kernel: acpiphp: Slot [29] registered
Jan 14 00:56:50.459831 kernel: acpiphp: Slot [30] registered
Jan 14 00:56:50.460769 kernel: acpiphp: Slot [31] registered
Jan 14 00:56:50.460790 kernel: PCI host bridge to bus 0000:00
Jan 14 00:56:50.461021 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 00:56:50.461188 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 00:56:50.461356 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 00:56:50.461513 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 14 00:56:50.461668 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 14 00:56:50.461824 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 00:56:50.462027 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 14 00:56:50.462209 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 14 00:56:50.462387 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jan 14 00:56:50.462573 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 14 00:56:50.462743 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 14 00:56:50.464466 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 14 00:56:50.464698 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 14 00:56:50.464918 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 14 00:56:50.465121 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 14 00:56:50.465325 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 14 00:56:50.465523 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 00:56:50.465710 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jan 14 00:56:50.465936 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 14 00:56:50.466150 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 00:56:50.466358 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jan 14 00:56:50.466563 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jan 14 00:56:50.466759 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jan 14 00:56:50.466969 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jan 14 00:56:50.466994 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 00:56:50.467012 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 00:56:50.467029 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 00:56:50.467047 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 00:56:50.467063 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 14 00:56:50.467080 kernel: iommu: Default domain type: Translated
Jan 14 00:56:50.467099 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 00:56:50.467116 kernel: efivars: Registered efivars operations
Jan 14 00:56:50.467133 kernel: PCI: Using ACPI for IRQ routing
Jan 14 00:56:50.467150 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 00:56:50.467165 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jan 14 00:56:50.467180 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 14 00:56:50.467196 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 14 00:56:50.467395 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 14 00:56:50.467602 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 14 00:56:50.467804 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 00:56:50.467827 kernel: vgaarb: loaded
Jan 14 00:56:50.467865 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 14 00:56:50.467883 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 14 00:56:50.467900 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 00:56:50.467922 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 00:56:50.467937 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 00:56:50.467953 kernel: pnp: PnP ACPI init
Jan 14 00:56:50.467970 kernel: pnp: PnP ACPI: found 5 devices
Jan 14 00:56:50.467986 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 00:56:50.468003 kernel: NET: Registered PF_INET protocol family
Jan 14 00:56:50.468019 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 00:56:50.468040 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 14 00:56:50.468055 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 00:56:50.468071 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 00:56:50.468087 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 14 00:56:50.468103 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 14 00:56:50.468121 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 14 00:56:50.468139 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 14 00:56:50.468160 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 00:56:50.468179 kernel: NET: Registered PF_XDP protocol family
Jan 14 00:56:50.468373 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 00:56:50.468535 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 00:56:50.468693 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 00:56:50.468914 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 14 00:56:50.469086 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 14 00:56:50.469276 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 14 00:56:50.469298 kernel: PCI: CLS 0 bytes, default 64
Jan 14 00:56:50.469315 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 00:56:50.469332 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 14 00:56:50.469348 kernel: clocksource: Switched to clocksource tsc
Jan 14 00:56:50.469364 kernel: Initialise system trusted keyrings
Jan 14 00:56:50.469384 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 14 00:56:50.469400 kernel: Key type asymmetric registered
Jan 14 00:56:50.469416 kernel: Asymmetric key parser 'x509' registered
Jan 14 00:56:50.469432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 00:56:50.469449 kernel: io scheduler mq-deadline registered
Jan 14 00:56:50.469466 kernel: io scheduler kyber registered
Jan 14 00:56:50.469483 kernel: io scheduler bfq registered
Jan 14 00:56:50.469499 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 00:56:50.469518 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 00:56:50.469534 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 00:56:50.469551 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 00:56:50.469567 kernel: i8042: Warning: Keylock active
Jan 14 00:56:50.469582 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 00:56:50.469598 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 00:56:50.469797 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 14 00:56:50.470006 kernel: rtc_cmos 00:00: registered as rtc0
Jan 14 00:56:50.470178 kernel: rtc_cmos 00:00: setting system clock to 2026-01-14T00:56:47 UTC (1768352207)
Jan 14 00:56:50.470347 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 14 00:56:50.470394 kernel: intel_pstate: CPU model not supported
Jan 14 00:56:50.470423 kernel: efifb: probing for efifb
Jan 14 00:56:50.470439 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jan 14 00:56:50.470460 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 14 00:56:50.470476 kernel: efifb: scrolling: redraw
Jan 14 00:56:50.470492 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 00:56:50.470509 kernel: Console: switching to colour frame buffer device 100x37
Jan 14 00:56:50.470525 kernel: fb0: EFI VGA frame buffer device
Jan 14 00:56:50.470541 kernel: pstore: Using crash dump compression: deflate
Jan 14 00:56:50.470556 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 00:56:50.470574 kernel: NET: Registered PF_INET6 protocol family
Jan 14 00:56:50.470590 kernel: Segment Routing with IPv6
Jan 14 00:56:50.470606 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 00:56:50.470622 kernel: NET: Registered PF_PACKET protocol family
Jan 14 00:56:50.470639 kernel: Key type dns_resolver registered
Jan 14 00:56:50.470655 kernel: IPI shorthand broadcast: enabled
Jan 14 00:56:50.470671 kernel: sched_clock: Marking stable (1348001536, 138560298)->(1552618983, -66057149)
Jan 14 00:56:50.470691 kernel: registered taskstats version 1
Jan 14 00:56:50.470709 kernel: Loading compiled-in X.509 certificates
Jan 14 00:56:50.470726 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 58a78462583b088d099087e6f2d97e37d80e06bb'
Jan 14 00:56:50.470742 kernel: Demotion targets for Node 0: null
Jan 14 00:56:50.470758 kernel: Key type .fscrypt registered
Jan 14 00:56:50.470774 kernel: Key type fscrypt-provisioning registered
Jan 14 00:56:50.470790 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 00:56:50.470812 kernel: ima: Allocated hash algorithm: sha1
Jan 14 00:56:50.470829 kernel: ima: No architecture policies found
Jan 14 00:56:50.470863 kernel: clk: Disabling unused clocks
Jan 14 00:56:50.470879 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 00:56:50.470893 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 00:56:50.470916 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 14 00:56:50.470933 kernel: Run /init as init process
Jan 14 00:56:50.470950 kernel: with arguments:
Jan 14 00:56:50.470966 kernel: /init
Jan 14 00:56:50.470983 kernel: with environment:
Jan 14 00:56:50.470999 kernel: HOME=/
Jan 14 00:56:50.471016 kernel: TERM=linux
Jan 14 00:56:50.471184 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 14 00:56:50.471211 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 14 00:56:50.471357 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 14 00:56:50.471382 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 00:56:50.471400 kernel: GPT:25804799 != 33554431
Jan 14 00:56:50.471420 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 00:56:50.471441 kernel: GPT:25804799 != 33554431
Jan 14 00:56:50.471458 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 00:56:50.471475 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 14 00:56:50.471493 kernel: SCSI subsystem initialized
Jan 14 00:56:50.471510 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 00:56:50.471526 kernel: device-mapper: uevent: version 1.0.3
Jan 14 00:56:50.471543 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 00:56:50.471562 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 00:56:50.471579 kernel: raid6: avx512x4 gen() 15288 MB/s
Jan 14 00:56:50.471596 kernel: raid6: avx512x2 gen() 15449 MB/s
Jan 14 00:56:50.471612 kernel: raid6: avx512x1 gen() 15549 MB/s
Jan 14 00:56:50.471628 kernel: raid6: avx2x4 gen() 15398 MB/s
Jan 14 00:56:50.471645 kernel: raid6: avx2x2 gen() 15410 MB/s
Jan 14 00:56:50.471662 kernel: raid6: avx2x1 gen() 11583 MB/s
Jan 14 00:56:50.471682 kernel: raid6: using algorithm avx512x1 gen() 15549 MB/s
Jan 14 00:56:50.471698 kernel: raid6: .... xor() 21500 MB/s, rmw enabled
Jan 14 00:56:50.471715 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 00:56:50.471732 kernel: xor: automatically using best checksumming function avx
Jan 14 00:56:50.471749 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 00:56:50.471766 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 00:56:50.471784 kernel: BTRFS: device fsid 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (153)
Jan 14 00:56:50.471804 kernel: BTRFS info (device dm-0): first mount of filesystem 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac
Jan 14 00:56:50.471821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 00:56:50.471838 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 14 00:56:50.471878 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 00:56:50.471895 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 00:56:50.471911 kernel: loop: module loaded
Jan 14 00:56:50.471927 kernel: loop0: detected capacity change from 0 to 100552
Jan 14 00:56:50.471947 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 00:56:50.471964 systemd[1]: Successfully made /usr/ read-only.
Jan 14 00:56:50.471984 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 00:56:50.472001 systemd[1]: Detected virtualization amazon.
Jan 14 00:56:50.472017 systemd[1]: Detected architecture x86-64.
Jan 14 00:56:50.472033 systemd[1]: Running in initrd.
Jan 14 00:56:50.472053 systemd[1]: No hostname configured, using default hostname.
Jan 14 00:56:50.472071 systemd[1]: Hostname set to .
Jan 14 00:56:50.472088 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 00:56:50.472105 systemd[1]: Queued start job for default target initrd.target.
Jan 14 00:56:50.472121 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 00:56:50.472139 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:56:50.472160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:56:50.472179 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 00:56:50.472196 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 00:56:50.472216 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 00:56:50.472234 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 00:56:50.472252 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:56:50.472272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:56:50.472288 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 00:56:50.472304 systemd[1]: Reached target paths.target - Path Units.
Jan 14 00:56:50.472320 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 00:56:50.472337 systemd[1]: Reached target swap.target - Swaps.
Jan 14 00:56:50.472354 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 00:56:50.472370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 00:56:50.472391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 00:56:50.472409 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:56:50.472429 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 00:56:50.472446 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 00:56:50.472463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:56:50.472479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:56:50.472498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:56:50.472515 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 00:56:50.472533 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 00:56:50.472550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 00:56:50.472567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 00:56:50.472583 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 00:56:50.472600 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 00:56:50.472620 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 00:56:50.472636 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 00:56:50.473017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 00:56:50.473040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:56:50.473063 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 00:56:50.473081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:56:50.473129 systemd-journald[288]: Collecting audit messages is enabled.
Jan 14 00:56:50.473171 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 00:56:50.473190 kernel: audit: type=1130 audit(1768352210.451:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.473208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 00:56:50.473227 systemd-journald[288]: Journal started
Jan 14 00:56:50.473261 systemd-journald[288]: Runtime Journal (/run/log/journal/ec245a641ab6539964072d3280fdd716) is 4.7M, max 38M, 33.2M free.
Jan 14 00:56:50.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.475873 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 00:56:50.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.481575 kernel: audit: type=1130 audit(1768352210.475:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.481707 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 00:56:50.522874 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 00:56:50.533184 kernel: Bridge firewalling registered
Jan 14 00:56:50.532398 systemd-modules-load[293]: Inserted module 'br_netfilter'
Jan 14 00:56:50.534280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:56:50.543870 kernel: audit: type=1130 audit(1768352210.535:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.537980 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 00:56:50.598749 systemd-tmpfiles[303]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 00:56:50.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.603057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:56:50.611611 kernel: audit: type=1130 audit(1768352210.602:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.614029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 00:56:50.616003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:56:50.619491 kernel: audit: type=1130 audit(1768352210.616:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.617653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 00:56:50.627769 kernel: audit: type=1130 audit(1768352210.622:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.629077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:56:50.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.633998 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 00:56:50.638928 kernel: audit: type=1130 audit(1768352210.628:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.637000 audit: BPF prog-id=6 op=LOAD
Jan 14 00:56:50.640125 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 00:56:50.642394 kernel: audit: type=1334 audit(1768352210.637:9): prog-id=6 op=LOAD
Jan 14 00:56:50.647927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:56:50.654348 kernel: audit: type=1130 audit(1768352210.647:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.674957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 00:56:50.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.679041 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 00:56:50.746504 systemd-resolved[317]: Positive Trust Anchors:
Jan 14 00:56:50.747218 dracut-cmdline[330]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 00:56:50.748890 systemd-resolved[317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 00:56:50.748896 systemd-resolved[317]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 00:56:50.748936 systemd-resolved[317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 00:56:50.776781 systemd-resolved[317]: Defaulting to hostname 'linux'.
Jan 14 00:56:50.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:50.777948 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 00:56:50.778461 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 00:56:50.965873 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 00:56:51.045871 kernel: iscsi: registered transport (tcp)
Jan 14 00:56:51.069156 kernel: iscsi: registered transport (qla4xxx)
Jan 14 00:56:51.069227 kernel: QLogic iSCSI HBA Driver
Jan 14 00:56:51.094091 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 00:56:51.113325 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:56:51.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.114179 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 00:56:51.161338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 00:56:51.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.164001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 00:56:51.166978 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 00:56:51.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.195000 audit: BPF prog-id=7 op=LOAD
Jan 14 00:56:51.195000 audit: BPF prog-id=8 op=LOAD
Jan 14 00:56:51.195337 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 00:56:51.197980 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:56:51.228090 systemd-udevd[573]: Using default interface naming scheme 'v257'.
Jan 14 00:56:51.239401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:56:51.241528 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 14 00:56:51.241566 kernel: audit: type=1130 audit(1768352211.239:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.242989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 00:56:51.263956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 00:56:51.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.269000 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 00:56:51.269962 kernel: audit: type=1130 audit(1768352211.263:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.264000 audit: BPF prog-id=9 op=LOAD
Jan 14 00:56:51.271862 kernel: audit: type=1334 audit(1768352211.264:20): prog-id=9 op=LOAD
Jan 14 00:56:51.272460 dracut-pre-trigger[649]: rd.md=0: removing MD RAID activation
Jan 14 00:56:51.298512 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 00:56:51.304979 kernel: audit: type=1130 audit(1768352211.298:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.301986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 00:56:51.319729 systemd-networkd[675]: lo: Link UP
Jan 14 00:56:51.320345 systemd-networkd[675]: lo: Gained carrier
Jan 14 00:56:51.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.321382 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 00:56:51.326546 kernel: audit: type=1130 audit(1768352211.321:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.322053 systemd[1]: Reached target network.target - Network.
Jan 14 00:56:51.364638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:56:51.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.371875 kernel: audit: type=1130 audit(1768352211.364:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.372555 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 00:56:51.449394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 00:56:51.449655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:56:51.455499 kernel: audit: type=1131 audit(1768352211.449:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.450336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:56:51.456871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:56:51.493870 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 14 00:56:51.494223 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 14 00:56:51.502904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:56:51.513592 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 14 00:56:51.514142 kernel: audit: type=1130 audit(1768352211.502:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:56:51.517370 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:d8:8a:e4:3e:55
Jan 14 00:56:51.519152 (udev-worker)[710]: Network interface NamePolicy= disabled on kernel command line.
Jan 14 00:56:51.544961 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 00:56:51.549637 systemd-networkd[675]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:56:51.549652 systemd-networkd[675]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 00:56:51.554403 systemd-networkd[675]: eth0: Link UP Jan 14 00:56:51.555554 systemd-networkd[675]: eth0: Gained carrier Jan 14 00:56:51.555581 systemd-networkd[675]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:56:51.566923 systemd-networkd[675]: eth0: DHCPv4 address 172.31.19.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 14 00:56:51.627907 kernel: AES CTR mode by8 optimization enabled Jan 14 00:56:51.627994 kernel: nvme nvme0: using unchecked data buffer Jan 14 00:56:51.628195 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 14 00:56:51.729545 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 14 00:56:51.730925 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 00:56:51.751622 disk-uuid[818]: Primary Header is updated. Jan 14 00:56:51.751622 disk-uuid[818]: Secondary Entries is updated. Jan 14 00:56:51.751622 disk-uuid[818]: Secondary Header is updated. Jan 14 00:56:51.821102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 14 00:56:51.876900 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 14 00:56:51.897629 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 14 00:56:52.150976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 00:56:52.157759 kernel: audit: type=1130 audit(1768352212.150:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.158541 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:56:52.159057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:56:52.160219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:56:52.162115 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 00:56:52.183122 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:56:52.188280 kernel: audit: type=1130 audit(1768352212.182:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.853717 disk-uuid[819]: Warning: The kernel is still using the old partition table. Jan 14 00:56:52.853717 disk-uuid[819]: The new table will be used at the next reboot or after you Jan 14 00:56:52.853717 disk-uuid[819]: run partprobe(8) or kpartx(8) Jan 14 00:56:52.853717 disk-uuid[819]: The operation has completed successfully. Jan 14 00:56:52.864779 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 14 00:56:52.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.864892 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 00:56:52.866326 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 00:56:52.922177 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1075) Jan 14 00:56:52.922232 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 00:56:52.925587 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 00:56:52.964055 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 00:56:52.964123 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 00:56:52.972859 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 00:56:52.973288 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 00:56:52.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:52.974809 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 00:56:53.589019 systemd-networkd[675]: eth0: Gained IPv6LL Jan 14 00:56:54.269305 ignition[1094]: Ignition 2.24.0 Jan 14 00:56:54.269319 ignition[1094]: Stage: fetch-offline Jan 14 00:56:54.269384 ignition[1094]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:54.269393 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:54.269663 ignition[1094]: Ignition finished successfully Jan 14 00:56:54.272550 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:56:54.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:54.274028 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 00:56:54.298538 ignition[1100]: Ignition 2.24.0 Jan 14 00:56:54.298550 ignition[1100]: Stage: fetch Jan 14 00:56:54.298744 ignition[1100]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:54.298755 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:54.298821 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:54.307479 ignition[1100]: PUT result: OK Jan 14 00:56:54.309346 ignition[1100]: parsed url from cmdline: "" Jan 14 00:56:54.309355 ignition[1100]: no config URL provided Jan 14 00:56:54.309361 ignition[1100]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:56:54.309378 ignition[1100]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:56:54.309393 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:54.310545 ignition[1100]: PUT result: OK Jan 14 00:56:54.310590 ignition[1100]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 14 00:56:54.311491 ignition[1100]: GET result: OK Jan 14 00:56:54.311561 ignition[1100]: parsing config with SHA512: 71ff03b60c5e447d1d04809a4fdab0e920a7ee10f1f9bb50ffd9268b6f622548458a7643d49419de120fb75d7a5d4f53d493e6ce96ec9bd686e103ada1a7b4ee Jan 14 00:56:54.316902 unknown[1100]: fetched base config from "system" Jan 14 00:56:54.316914 unknown[1100]: fetched base config from "system" Jan 14 00:56:54.317246 ignition[1100]: fetch: fetch complete Jan 14 00:56:54.316920 unknown[1100]: fetched user config from "aws" Jan 14 00:56:54.317250 ignition[1100]: fetch: fetch passed Jan 14 00:56:54.317287 ignition[1100]: Ignition finished successfully Jan 14 00:56:54.319440 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 00:56:54.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:54.320687 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 00:56:54.341706 ignition[1107]: Ignition 2.24.0 Jan 14 00:56:54.341722 ignition[1107]: Stage: kargs Jan 14 00:56:54.341905 ignition[1107]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:54.341912 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:54.341978 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:54.343185 ignition[1107]: PUT result: OK Jan 14 00:56:54.347461 ignition[1107]: kargs: kargs passed Jan 14 00:56:54.347537 ignition[1107]: Ignition finished successfully Jan 14 00:56:54.349142 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 00:56:54.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:54.350507 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 14 00:56:54.379720 ignition[1113]: Ignition 2.24.0 Jan 14 00:56:54.379734 ignition[1113]: Stage: disks Jan 14 00:56:54.379947 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:54.379954 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:54.380026 ignition[1113]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:54.381108 ignition[1113]: PUT result: OK Jan 14 00:56:54.385237 ignition[1113]: disks: disks passed Jan 14 00:56:54.385296 ignition[1113]: Ignition finished successfully Jan 14 00:56:54.386817 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 00:56:54.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:54.387318 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 00:56:54.387635 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 00:56:54.388101 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:56:54.388582 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:56:54.389079 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:56:54.390583 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 00:56:54.493871 systemd-fsck[1121]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 00:56:54.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:54.496912 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 00:56:54.499611 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 00:56:54.704867 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6efdc615-0e3c-4caf-8d0b-1f38e5c59ef0 r/w with ordered data mode. Quota mode: none. Jan 14 00:56:54.706335 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 00:56:54.706992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 00:56:54.764392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:56:54.766086 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 00:56:54.768583 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 00:56:54.769275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 00:56:54.769958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:56:54.775041 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 00:56:54.776748 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 00:56:54.787860 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1140) Jan 14 00:56:54.791396 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 00:56:54.791424 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 00:56:54.796108 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 00:56:54.796138 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 00:56:54.798177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:56:56.875838 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 00:56:56.881272 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 14 00:56:56.881298 kernel: audit: type=1130 audit(1768352216.875:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:56.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:56.879938 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 00:56:56.882626 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 00:56:56.901088 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 00:56:56.902950 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 00:56:56.928423 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 00:56:56.933088 kernel: audit: type=1130 audit(1768352216.928:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:56.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:56.933169 ignition[1236]: INFO : Ignition 2.24.0 Jan 14 00:56:56.933169 ignition[1236]: INFO : Stage: mount Jan 14 00:56:56.933169 ignition[1236]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:56.933169 ignition[1236]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:56.933169 ignition[1236]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:56.933169 ignition[1236]: INFO : PUT result: OK Jan 14 00:56:56.936175 ignition[1236]: INFO : mount: mount passed Jan 14 00:56:56.936175 ignition[1236]: INFO : Ignition finished successfully Jan 14 00:56:56.937832 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 00:56:56.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:56.939425 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 00:56:56.943189 kernel: audit: type=1130 audit(1768352216.937:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:56.968231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 14 00:56:56.991865 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1248) Jan 14 00:56:56.995920 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282 Jan 14 00:56:56.995974 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 00:56:57.002395 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 00:56:57.002523 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 00:56:57.004449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:56:57.033185 ignition[1265]: INFO : Ignition 2.24.0 Jan 14 00:56:57.033185 ignition[1265]: INFO : Stage: files Jan 14 00:56:57.034222 ignition[1265]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:57.034222 ignition[1265]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:57.034222 ignition[1265]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:57.036655 ignition[1265]: INFO : PUT result: OK Jan 14 00:56:57.039093 ignition[1265]: DEBUG : files: compiled without relabeling support, skipping Jan 14 00:56:57.040832 ignition[1265]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 00:56:57.040832 ignition[1265]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 00:56:57.104100 ignition[1265]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 00:56:57.104874 ignition[1265]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 00:56:57.104874 ignition[1265]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 00:56:57.104553 unknown[1265]: wrote ssh authorized keys file for user: core Jan 14 00:56:57.106641 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 00:56:57.107240 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 00:56:57.199206 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 00:56:57.545074 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 00:56:57.545074 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 
00:56:57.546627 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:56:57.550863 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:56:57.551502 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:56:57.551502 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 00:56:57.553204 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 00:56:57.553204 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 00:56:57.553204 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 14 00:56:58.003677 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 00:56:58.467013 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 00:56:58.467013 ignition[1265]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 00:56:58.504417 ignition[1265]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:56:58.510085 ignition[1265]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:56:58.510085 ignition[1265]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 00:56:58.510085 ignition[1265]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 00:56:58.519487 kernel: audit: type=1130 audit(1768352218.512:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.519598 ignition[1265]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 00:56:58.519598 ignition[1265]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:56:58.519598 ignition[1265]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:56:58.519598 ignition[1265]: INFO : files: files passed Jan 14 00:56:58.519598 ignition[1265]: INFO : Ignition finished successfully Jan 14 00:56:58.512430 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 00:56:58.516018 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 14 00:56:58.523000 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 00:56:58.534610 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 00:56:58.535243 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 00:56:58.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.541177 kernel: audit: type=1130 audit(1768352218.535:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.541217 kernel: audit: type=1131 audit(1768352218.537:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.577643 initrd-setup-root-after-ignition[1297]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:56:58.577643 initrd-setup-root-after-ignition[1297]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:56:58.581250 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:56:58.583420 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:56:58.588858 kernel: audit: type=1130 audit(1768352218.583:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.584096 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 00:56:58.590184 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 00:56:58.648763 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 00:56:58.648882 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 00:56:58.657940 kernel: audit: type=1130 audit(1768352218.648:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.657978 kernel: audit: type=1131 audit(1768352218.648:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 00:56:58.650205 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 00:56:58.658272 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 00:56:58.659172 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 00:56:58.660050 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 00:56:58.711884 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:56:58.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.718076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 00:56:58.750976 kernel: audit: type=1130 audit(1768352218.713:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.817035 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:56:58.817272 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:56:58.818497 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:56:58.819385 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 00:56:58.820253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 00:56:58.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.820437 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:56:58.821579 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 00:56:58.822496 systemd[1]: Stopped target basic.target - Basic System. Jan 14 00:56:58.823297 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 00:56:58.824109 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:56:58.824859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 00:56:58.825570 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:56:58.826393 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 00:56:58.827236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:56:58.828098 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 00:56:58.829246 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 00:56:58.830045 systemd[1]: Stopped target swap.target - Swaps. Jan 14 00:56:58.830819 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 00:56:58.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.831078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:56:58.832137 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:56:58.833038 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 14 00:56:58.833731 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 00:56:58.833881 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:56:58.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.834549 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 00:56:58.834723 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 00:56:58.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.836159 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 00:56:58.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.836391 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:56:58.837178 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 00:56:58.837376 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 00:56:58.839968 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 00:56:58.840370 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 00:56:58.840583 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:56:58.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.846119 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 00:56:58.846953 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 00:56:58.847765 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:56:58.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.849672 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 00:56:58.849887 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:56:58.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.851978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 00:56:58.852915 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:56:58.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.862973 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 00:56:58.863119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 14 00:56:58.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.874604 ignition[1321]: INFO : Ignition 2.24.0 Jan 14 00:56:58.875961 ignition[1321]: INFO : Stage: umount Jan 14 00:56:58.875961 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:56:58.875961 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 00:56:58.875961 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 00:56:58.878215 ignition[1321]: INFO : PUT result: OK Jan 14 00:56:58.882723 ignition[1321]: INFO : umount: umount passed Jan 14 00:56:58.883459 ignition[1321]: INFO : Ignition finished successfully Jan 14 00:56:58.886765 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 00:56:58.887697 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 00:56:58.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.887836 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 00:56:58.889370 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 00:56:58.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.889477 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 00:56:58.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.890223 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 00:56:58.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.890283 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 00:56:58.890992 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 00:56:58.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.891066 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 00:56:58.891715 systemd[1]: Stopped target network.target - Network. Jan 14 00:56:58.892315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 00:56:58.892388 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:56:58.893031 systemd[1]: Stopped target paths.target - Path Units. Jan 14 00:56:58.893652 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 00:56:58.896906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:56:58.897255 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 14 00:56:58.898173 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 00:56:58.898907 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 00:56:58.898969 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:56:58.899535 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 00:56:58.899582 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:56:58.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.900178 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 00:56:58.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.900216 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:56:58.900774 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 00:56:58.900937 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 00:56:58.901551 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 00:56:58.901608 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 00:56:58.902730 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 00:56:58.903226 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 00:56:58.910623 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 00:56:58.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.910759 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 00:56:58.913222 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 00:56:58.913366 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 00:56:58.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.915000 audit: BPF prog-id=6 op=UNLOAD Jan 14 00:56:58.916772 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 00:56:58.916000 audit: BPF prog-id=9 op=UNLOAD Jan 14 00:56:58.917195 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 00:56:58.917232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:56:58.919218 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 00:56:58.920971 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 00:56:58.921054 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:56:58.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.923397 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 00:56:58.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 00:56:58.923456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:56:58.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.925195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 00:56:58.925252 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 00:56:58.926520 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:56:58.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.935539 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 00:56:58.935721 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:56:58.938426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 00:56:58.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.938522 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 00:56:58.939555 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 00:56:58.939600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:56:58.940188 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 00:56:58.940247 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:56:58.943727 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 00:56:58.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.943789 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 00:56:58.944792 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 00:56:58.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.944880 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:56:58.947597 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 00:56:58.948177 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 00:56:58.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:56:58.948243 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:56:58.950111 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 00:56:58.950178 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:56:58.951191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:56:58.951245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:56:58.965324 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 00:56:58.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.965477 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 00:56:58.970700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 00:56:58.970829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 00:56:58.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:58.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:59.022561 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 00:56:59.022673 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 00:56:59.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:59.023840 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 00:56:59.024328 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 00:56:59.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:56:59.024387 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 00:56:59.026022 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 00:56:59.044639 systemd[1]: Switching root. Jan 14 00:56:59.111338 systemd-journald[288]: Journal stopped Jan 14 00:57:02.091836 systemd-journald[288]: Received SIGTERM from PID 1 (systemd). Jan 14 00:57:02.091958 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 00:57:02.091988 kernel: SELinux: policy capability open_perms=1 Jan 14 00:57:02.092011 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 00:57:02.092031 kernel: SELinux: policy capability always_check_network=0 Jan 14 00:57:02.092058 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 00:57:02.092081 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 00:57:02.092104 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 00:57:02.092127 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 00:57:02.092151 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 00:57:02.092170 systemd[1]: Successfully loaded SELinux policy in 154.443ms. 
Jan 14 00:57:02.092192 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.524ms. Jan 14 00:57:02.092215 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:57:02.092235 systemd[1]: Detected virtualization amazon. Jan 14 00:57:02.092259 systemd[1]: Detected architecture x86-64. Jan 14 00:57:02.092278 systemd[1]: Detected first boot. Jan 14 00:57:02.092302 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 00:57:02.092322 zram_generator::config[1365]: No configuration found. Jan 14 00:57:02.092347 kernel: Guest personality initialized and is inactive Jan 14 00:57:02.092365 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 00:57:02.092383 kernel: Initialized host personality Jan 14 00:57:02.092400 kernel: NET: Registered PF_VSOCK protocol family Jan 14 00:57:02.092419 systemd[1]: Populated /etc with preset unit settings. Jan 14 00:57:02.092626 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 00:57:02.092645 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 00:57:02.092665 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 00:57:02.092693 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 00:57:02.092713 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 00:57:02.092733 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 00:57:02.092753 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 00:57:02.092773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 00:57:02.092793 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 00:57:02.092816 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 00:57:02.092838 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 00:57:02.092876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:57:02.092895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:57:02.092917 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 00:57:02.092938 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 00:57:02.092959 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 00:57:02.092986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:57:02.093008 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 00:57:02.093030 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:57:02.093050 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:57:02.093070 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 00:57:02.093091 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jan 14 00:57:02.093115 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 00:57:02.093134 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 00:57:02.093153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:57:02.093175 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:57:02.093194 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 00:57:02.093213 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:57:02.093232 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:57:02.093249 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 00:57:02.093273 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 00:57:02.093292 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 00:57:02.093311 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:57:02.093331 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 00:57:02.093349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:57:02.093369 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 00:57:02.093389 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 00:57:02.093412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:57:02.093431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:57:02.093451 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 00:57:02.093472 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 00:57:02.093491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 00:57:02.093512 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 00:57:02.093533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:57:02.093556 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 00:57:02.093575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 00:57:02.093595 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 00:57:02.093617 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 00:57:02.093639 systemd[1]: Reached target machines.target - Containers. Jan 14 00:57:02.093661 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 00:57:02.093685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:57:02.093706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:57:02.093726 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 00:57:02.093745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:57:02.093763 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 14 00:57:02.093782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 00:57:02.093801 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 00:57:02.093824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:57:02.093867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 00:57:02.093886 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 00:57:02.093905 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 00:57:02.093926 kernel: kauditd_printk_skb: 54 callbacks suppressed Jan 14 00:57:02.093949 kernel: audit: type=1131 audit(1768352221.960:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.093986 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 00:57:02.094013 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 00:57:02.094035 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:57:02.094055 kernel: audit: type=1131 audit(1768352221.971:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.094076 kernel: audit: type=1334 audit(1768352221.976:102): prog-id=14 op=UNLOAD Jan 14 00:57:02.094095 kernel: audit: type=1334 audit(1768352221.976:103): prog-id=13 op=UNLOAD Jan 14 00:57:02.094127 kernel: audit: type=1334 audit(1768352221.977:104): prog-id=15 op=LOAD Jan 14 00:57:02.094150 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:57:02.094174 kernel: audit: type=1334 audit(1768352221.980:105): prog-id=16 op=LOAD Jan 14 00:57:02.094195 kernel: audit: type=1334 audit(1768352221.981:106): prog-id=17 op=LOAD Jan 14 00:57:02.094216 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:57:02.094239 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:57:02.094262 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 00:57:02.094284 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 00:57:02.094306 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:57:02.094332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:57:02.094354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 00:57:02.094376 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 00:57:02.094400 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 00:57:02.094435 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 00:57:02.094460 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 00:57:02.094483 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 14 00:57:02.094509 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:57:02.094532 kernel: fuse: init (API version 7.41) Jan 14 00:57:02.094553 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 00:57:02.094576 kernel: audit: type=1130 audit(1768352222.058:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.094604 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 00:57:02.094628 kernel: audit: type=1130 audit(1768352222.068:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.094649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:57:02.094673 kernel: audit: type=1131 audit(1768352222.068:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.094695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:57:02.094749 systemd-journald[1439]: Collecting audit messages is enabled. Jan 14 00:57:02.094797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:57:02.094821 systemd-journald[1439]: Journal started Jan 14 00:57:02.094882 systemd-journald[1439]: Runtime Journal (/run/log/journal/ec245a641ab6539964072d3280fdd716) is 4.7M, max 38M, 33.2M free. Jan 14 00:57:02.094948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 00:57:01.827000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 00:57:01.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:01.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:01.976000 audit: BPF prog-id=14 op=UNLOAD Jan 14 00:57:01.976000 audit: BPF prog-id=13 op=UNLOAD Jan 14 00:57:01.977000 audit: BPF prog-id=15 op=LOAD Jan 14 00:57:01.980000 audit: BPF prog-id=16 op=LOAD Jan 14 00:57:01.981000 audit: BPF prog-id=17 op=LOAD Jan 14 00:57:02.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:57:02.088000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 00:57:02.088000 audit[1439]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc057123a0 a2=4000 a3=0 items=0 ppid=1 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:02.088000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 00:57:02.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:01.741880 systemd[1]: Queued start job for default target multi-user.target. Jan 14 00:57:01.762099 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 14 00:57:01.762710 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 00:57:02.098506 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:57:02.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.104608 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 00:57:02.105040 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 00:57:02.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.108452 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:57:02.108693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:57:02.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:57:02.110719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:57:02.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.112434 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 00:57:02.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.126697 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:57:02.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.134363 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:57:02.138458 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 00:57:02.143974 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 00:57:02.149975 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 00:57:02.151927 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 00:57:02.151983 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:57:02.158454 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 00:57:02.161919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:57:02.162095 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:57:02.174877 kernel: ACPI: bus type drm_connector registered Jan 14 00:57:02.178951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 00:57:02.183252 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 00:57:02.184016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:57:02.187129 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 00:57:02.188963 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:57:02.190592 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:57:02.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:57:02.198034 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 00:57:02.200197 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:57:02.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.200491 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:57:02.202839 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 00:57:02.207333 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 00:57:02.208242 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 00:57:02.234907 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 00:57:02.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.236174 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 00:57:02.240713 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 00:57:02.276459 systemd-journald[1439]: Time spent on flushing to /var/log/journal/ec245a641ab6539964072d3280fdd716 is 80.704ms for 1156 entries. Jan 14 00:57:02.276459 systemd-journald[1439]: System Journal (/var/log/journal/ec245a641ab6539964072d3280fdd716) is 8M, max 588.1M, 580.1M free. Jan 14 00:57:02.369129 systemd-journald[1439]: Received client request to flush runtime journal. Jan 14 00:57:02.369255 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 00:57:02.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.312575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:57:02.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.370072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:57:02.371258 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 00:57:02.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.386664 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 00:57:02.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.456244 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
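The journal statistics above allow a quick derived figure: flushing 1156 runtime entries to /var/log/journal took 80.704 ms, i.e. roughly 70 µs per entry. A back-of-the-envelope sketch using only the numbers printed in the log:

```python
# Values copied from the systemd-journald flush message above.
flush_ms = 80.704        # time spent flushing the runtime journal to /var/log/journal
entries = 1156           # number of entries flushed
print(f"~{flush_ms * 1000 / entries:.1f} us per entry")   # ≈ 69.8 us
```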
Jan 14 00:57:02.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.462547 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 00:57:02.508359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 00:57:02.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.509000 audit: BPF prog-id=18 op=LOAD Jan 14 00:57:02.509000 audit: BPF prog-id=19 op=LOAD Jan 14 00:57:02.509000 audit: BPF prog-id=20 op=LOAD Jan 14 00:57:02.510974 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 00:57:02.512000 audit: BPF prog-id=21 op=LOAD Jan 14 00:57:02.516047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:57:02.518211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:57:02.524000 audit: BPF prog-id=22 op=LOAD Jan 14 00:57:02.524000 audit: BPF prog-id=23 op=LOAD Jan 14 00:57:02.524000 audit: BPF prog-id=24 op=LOAD Jan 14 00:57:02.528037 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 00:57:02.532000 audit: BPF prog-id=25 op=LOAD Jan 14 00:57:02.532000 audit: BPF prog-id=26 op=LOAD Jan 14 00:57:02.533000 audit: BPF prog-id=27 op=LOAD Jan 14 00:57:02.534719 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 00:57:02.599259 systemd-nsresourced[1519]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 00:57:02.601650 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 00:57:02.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.624708 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jan 14 00:57:02.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.625780 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jan 14 00:57:02.625970 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 00:57:02.636509 kernel: loop2: detected capacity change from 0 to 229808 Jan 14 00:57:02.637020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:57:02.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.765229 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 00:57:02.771989 systemd-oomd[1516]: No swap; memory pressure usage will be degraded Jan 14 00:57:02.772609 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
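systemd-oomd notes above that, with no swap configured, its memory-pressure signal will be degraded. That signal is derived from the kernel's pressure-stall information (PSI); as a hedged illustration, the sketch below reads the system-wide /proc/pressure/memory file directly. This is not how systemd-oomd is implemented internally (it watches per-cgroup memory.pressure files); it only shows the data format involved.

```python
# Hedged illustration of the PSI data behind systemd-oomd's memory-pressure signal.
def read_memory_pressure(path="/proc/pressure/memory"):
    pressure = {}
    with open(path) as fh:
        # Lines look like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
        for line in fh:
            kind, *fields = line.split()
            pressure[kind] = {key: float(val) for key, val in (f.split("=") for f in fields)}
    return pressure

print(read_memory_pressure())
```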
Jan 14 00:57:02.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.837621 systemd-resolved[1517]: Positive Trust Anchors: Jan 14 00:57:02.837640 systemd-resolved[1517]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:57:02.837646 systemd-resolved[1517]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:57:02.837713 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:57:02.844713 systemd-resolved[1517]: Defaulting to hostname 'linux'. Jan 14 00:57:02.846574 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:57:02.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:02.847630 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:57:02.939868 kernel: loop3: detected capacity change from 0 to 73176 Jan 14 00:57:03.089535 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 00:57:03.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:03.089000 audit: BPF prog-id=8 op=UNLOAD Jan 14 00:57:03.089000 audit: BPF prog-id=7 op=UNLOAD Jan 14 00:57:03.090000 audit: BPF prog-id=28 op=LOAD Jan 14 00:57:03.090000 audit: BPF prog-id=29 op=LOAD Jan 14 00:57:03.092067 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:57:03.135906 systemd-udevd[1542]: Using default interface naming scheme 'v257'. Jan 14 00:57:03.270399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:57:03.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:03.271000 audit: BPF prog-id=30 op=LOAD Jan 14 00:57:03.273994 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:57:03.278913 kernel: loop4: detected capacity change from 0 to 50784 Jan 14 00:57:03.323859 (udev-worker)[1550]: Network interface NamePolicy= disabled on kernel command line. Jan 14 00:57:03.333182 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 00:57:03.344201 systemd-networkd[1548]: lo: Link UP Jan 14 00:57:03.344212 systemd-networkd[1548]: lo: Gained carrier Jan 14 00:57:03.345348 systemd[1]: Started systemd-networkd.service - Network Configuration. 
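systemd-resolved lists two positive trust anchors for the DNS root zone above, expressed as DS records. Assuming the standard RFC 4034 DS field layout (key tag, algorithm, digest type, digest, where algorithm 8 is RSA/SHA-256 and digest type 2 is SHA-256), the strings printed in the log can be unpacked like this; the snippet parses only the records shown above.

```python
# Parse the DNSSEC trust-anchor DS records printed by systemd-resolved above.
ANCHORS = [
    ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    ". IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16",
]

for record in ANCHORS:
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()
    print(f"zone={owner} key_tag={key_tag} alg={algorithm} "
          f"digest_type={digest_type} digest={digest[:16]}...")
```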
Jan 14 00:57:03.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:03.345890 systemd[1]: Reached target network.target - Network. Jan 14 00:57:03.348175 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 00:57:03.350066 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 00:57:03.378894 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 00:57:03.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:03.389047 systemd-networkd[1548]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:57:03.389055 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:57:03.392604 systemd-networkd[1548]: eth0: Link UP Jan 14 00:57:03.392761 systemd-networkd[1548]: eth0: Gained carrier Jan 14 00:57:03.392787 systemd-networkd[1548]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:57:03.404082 systemd-networkd[1548]: eth0: DHCPv4 address 172.31.19.12/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 14 00:57:03.404888 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 00:57:03.417871 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 00:57:03.424893 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 14 00:57:03.425202 kernel: ACPI: button: Power Button [PWRF] Jan 14 00:57:03.425225 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 14 00:57:03.426069 kernel: ACPI: button: Sleep Button [SLPF] Jan 14 00:57:03.518035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:57:03.528876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:57:03.529201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:57:03.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:03.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:03.532315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
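The lease reported above puts eth0 at 172.31.19.12/20 with gateway 172.31.16.1 (handed out by the same address). A quick standard-library check confirms the gateway is on-link within that /20:

```python
# Sanity-check the DHCPv4 lease printed by systemd-networkd above.
import ipaddress

iface = ipaddress.ip_interface("172.31.19.12/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True: the gateway is on the same subnet
```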
Jan 14 00:57:03.588872 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 00:57:03.605872 kernel: loop6: detected capacity change from 0 to 229808 Jan 14 00:57:03.631876 kernel: loop7: detected capacity change from 0 to 73176 Jan 14 00:57:03.647876 kernel: loop1: detected capacity change from 0 to 50784 Jan 14 00:57:03.672473 (sd-merge)[1593]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 14 00:57:03.676772 (sd-merge)[1593]: Merged extensions into '/usr'. Jan 14 00:57:03.681340 systemd[1]: Reload requested from client PID 1475 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 00:57:03.681358 systemd[1]: Reloading... Jan 14 00:57:03.753888 zram_generator::config[1628]: No configuration found. Jan 14 00:57:04.051770 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 14 00:57:04.053069 systemd[1]: Reloading finished in 371 ms. Jan 14 00:57:04.078215 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 00:57:04.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.079249 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:57:04.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.116107 systemd[1]: Starting ensure-sysext.service... Jan 14 00:57:04.119993 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 00:57:04.128802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
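The sd-merge lines above show four extension images being merged into /usr before systemd reloads. The real mechanism is an overlay mount managed by systemd-sysext; the sketch below only models the idea of unioning per-extension file lists. Both the example file paths and the "later image wins on collision" rule are illustrative assumptions, not behaviour taken from this log or from systemd's documentation.

```python
# Conceptual model only: each extension image contributes files under /usr,
# and the merged view is the union of those contributions.
extensions = {
    "containerd-flatcar.raw": {"/usr/bin/containerd": "containerd-flatcar"},   # hypothetical path
    "docker-flatcar.raw":     {"/usr/bin/dockerd": "docker-flatcar"},          # hypothetical path
    "kubernetes.raw":         {"/usr/bin/kubelet": "kubernetes"},              # hypothetical path
    "oem-ami.raw":            {"/usr/share/oem-release": "oem-ami"},           # hypothetical path
}

merged = {}
for name, files in extensions.items():
    merged.update(files)   # assumption: later entries override earlier ones on collisions

for path, origin in sorted(merged.items()):
    print(f"{path} <- {origin}")
```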
Jan 14 00:57:04.130000 audit: BPF prog-id=31 op=LOAD Jan 14 00:57:04.130000 audit: BPF prog-id=21 op=UNLOAD Jan 14 00:57:04.131000 audit: BPF prog-id=32 op=LOAD Jan 14 00:57:04.131000 audit: BPF prog-id=15 op=UNLOAD Jan 14 00:57:04.131000 audit: BPF prog-id=33 op=LOAD Jan 14 00:57:04.131000 audit: BPF prog-id=34 op=LOAD Jan 14 00:57:04.131000 audit: BPF prog-id=16 op=UNLOAD Jan 14 00:57:04.131000 audit: BPF prog-id=17 op=UNLOAD Jan 14 00:57:04.132000 audit: BPF prog-id=35 op=LOAD Jan 14 00:57:04.132000 audit: BPF prog-id=22 op=UNLOAD Jan 14 00:57:04.132000 audit: BPF prog-id=36 op=LOAD Jan 14 00:57:04.132000 audit: BPF prog-id=37 op=LOAD Jan 14 00:57:04.132000 audit: BPF prog-id=23 op=UNLOAD Jan 14 00:57:04.132000 audit: BPF prog-id=24 op=UNLOAD Jan 14 00:57:04.132000 audit: BPF prog-id=38 op=LOAD Jan 14 00:57:04.132000 audit: BPF prog-id=39 op=LOAD Jan 14 00:57:04.132000 audit: BPF prog-id=28 op=UNLOAD Jan 14 00:57:04.132000 audit: BPF prog-id=29 op=UNLOAD Jan 14 00:57:04.134000 audit: BPF prog-id=40 op=LOAD Jan 14 00:57:04.134000 audit: BPF prog-id=30 op=UNLOAD Jan 14 00:57:04.135000 audit: BPF prog-id=41 op=LOAD Jan 14 00:57:04.135000 audit: BPF prog-id=25 op=UNLOAD Jan 14 00:57:04.135000 audit: BPF prog-id=42 op=LOAD Jan 14 00:57:04.135000 audit: BPF prog-id=43 op=LOAD Jan 14 00:57:04.135000 audit: BPF prog-id=26 op=UNLOAD Jan 14 00:57:04.135000 audit: BPF prog-id=27 op=UNLOAD Jan 14 00:57:04.135000 audit: BPF prog-id=44 op=LOAD Jan 14 00:57:04.135000 audit: BPF prog-id=18 op=UNLOAD Jan 14 00:57:04.135000 audit: BPF prog-id=45 op=LOAD Jan 14 00:57:04.135000 audit: BPF prog-id=46 op=LOAD Jan 14 00:57:04.135000 audit: BPF prog-id=19 op=UNLOAD Jan 14 00:57:04.135000 audit: BPF prog-id=20 op=UNLOAD Jan 14 00:57:04.145226 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 00:57:04.145506 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 00:57:04.145831 systemd-tmpfiles[1765]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 00:57:04.146253 systemd[1]: Reload requested from client PID 1763 ('systemctl') (unit ensure-sysext.service)... Jan 14 00:57:04.146272 systemd[1]: Reloading... Jan 14 00:57:04.147575 systemd-tmpfiles[1765]: ACLs are not supported, ignoring. Jan 14 00:57:04.147718 systemd-tmpfiles[1765]: ACLs are not supported, ignoring. Jan 14 00:57:04.153343 systemd-tmpfiles[1765]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:57:04.153429 systemd-tmpfiles[1765]: Skipping /boot Jan 14 00:57:04.162484 systemd-tmpfiles[1765]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 00:57:04.162499 systemd-tmpfiles[1765]: Skipping /boot Jan 14 00:57:04.205912 zram_generator::config[1797]: No configuration found. Jan 14 00:57:04.452759 systemd[1]: Reloading finished in 306 ms. 
Jan 14 00:57:04.471000 audit: BPF prog-id=47 op=LOAD Jan 14 00:57:04.471000 audit: BPF prog-id=41 op=UNLOAD Jan 14 00:57:04.471000 audit: BPF prog-id=48 op=LOAD Jan 14 00:57:04.471000 audit: BPF prog-id=49 op=LOAD Jan 14 00:57:04.471000 audit: BPF prog-id=42 op=UNLOAD Jan 14 00:57:04.471000 audit: BPF prog-id=43 op=UNLOAD Jan 14 00:57:04.471000 audit: BPF prog-id=50 op=LOAD Jan 14 00:57:04.471000 audit: BPF prog-id=44 op=UNLOAD Jan 14 00:57:04.471000 audit: BPF prog-id=51 op=LOAD Jan 14 00:57:04.471000 audit: BPF prog-id=52 op=LOAD Jan 14 00:57:04.471000 audit: BPF prog-id=45 op=UNLOAD Jan 14 00:57:04.471000 audit: BPF prog-id=46 op=UNLOAD Jan 14 00:57:04.472000 audit: BPF prog-id=53 op=LOAD Jan 14 00:57:04.472000 audit: BPF prog-id=54 op=LOAD Jan 14 00:57:04.472000 audit: BPF prog-id=38 op=UNLOAD Jan 14 00:57:04.472000 audit: BPF prog-id=39 op=UNLOAD Jan 14 00:57:04.473000 audit: BPF prog-id=55 op=LOAD Jan 14 00:57:04.473000 audit: BPF prog-id=40 op=UNLOAD Jan 14 00:57:04.474000 audit: BPF prog-id=56 op=LOAD Jan 14 00:57:04.474000 audit: BPF prog-id=35 op=UNLOAD Jan 14 00:57:04.474000 audit: BPF prog-id=57 op=LOAD Jan 14 00:57:04.474000 audit: BPF prog-id=58 op=LOAD Jan 14 00:57:04.474000 audit: BPF prog-id=36 op=UNLOAD Jan 14 00:57:04.474000 audit: BPF prog-id=37 op=UNLOAD Jan 14 00:57:04.474000 audit: BPF prog-id=59 op=LOAD Jan 14 00:57:04.474000 audit: BPF prog-id=32 op=UNLOAD Jan 14 00:57:04.474000 audit: BPF prog-id=60 op=LOAD Jan 14 00:57:04.475000 audit: BPF prog-id=61 op=LOAD Jan 14 00:57:04.475000 audit: BPF prog-id=33 op=UNLOAD Jan 14 00:57:04.475000 audit: BPF prog-id=34 op=UNLOAD Jan 14 00:57:04.475000 audit: BPF prog-id=62 op=LOAD Jan 14 00:57:04.475000 audit: BPF prog-id=31 op=UNLOAD Jan 14 00:57:04.487911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 00:57:04.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.488944 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:57:04.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.498059 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:57:04.501059 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 00:57:04.504354 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 00:57:04.510956 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 00:57:04.513126 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 00:57:04.517241 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:57:04.517423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:57:04.527536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 00:57:04.532979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 14 00:57:04.535146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 00:57:04.535647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:57:04.535837 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:57:04.535943 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:57:04.536035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:57:04.539000 audit[1859]: SYSTEM_BOOT pid=1859 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.550941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:57:04.551403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 00:57:04.555499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 00:57:04.556099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 00:57:04.556282 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 00:57:04.556382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 00:57:04.556533 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 00:57:04.557033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 00:57:04.558534 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 00:57:04.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.559899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 00:57:04.560090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 00:57:04.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.561376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 00:57:04.561647 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 14 00:57:04.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.562721 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 00:57:04.563138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 00:57:04.569336 systemd[1]: Finished ensure-sysext.service. Jan 14 00:57:04.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.572627 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 00:57:04.572837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 00:57:04.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:04.574492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 00:57:04.574555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 00:57:04.580000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 00:57:04.580000 audit[1890]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff5d73000 a2=420 a3=0 items=0 ppid=1855 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:04.580000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:57:04.581263 augenrules[1890]: No rules Jan 14 00:57:04.582229 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:57:04.582521 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:57:04.600722 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 00:57:04.853003 systemd-networkd[1548]: eth0: Gained IPv6LL Jan 14 00:57:04.855730 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 14 00:57:04.856386 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 00:57:04.863083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 00:57:04.863712 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 00:57:07.238321 ldconfig[1857]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 00:57:07.242946 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 00:57:07.244455 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 00:57:07.274636 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 00:57:07.275344 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:57:07.275926 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 00:57:07.276347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 00:57:07.276723 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 00:57:07.277255 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 00:57:07.277699 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 00:57:07.278092 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 00:57:07.278549 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 00:57:07.278917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 00:57:07.279260 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 00:57:07.279303 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:57:07.279650 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:57:07.281464 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 00:57:07.283237 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 00:57:07.285868 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 00:57:07.286360 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 00:57:07.286723 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 00:57:07.293690 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 00:57:07.294483 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 00:57:07.295597 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 00:57:07.296894 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:57:07.297258 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:57:07.297652 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:57:07.297692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 00:57:07.298768 systemd[1]: Starting containerd.service - containerd container runtime... 
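ldconfig complains above that /usr/lib/ld.so.conf "is not an ELF file - it has the wrong magic bytes at the start"; the check it refers to is the 4-byte ELF magic at the beginning of a file, and the cache rebuild still finishes on the following line. A minimal illustration of that magic-byte check (the second path is only an example of a real binary):

```python
# An ELF object starts with the 4-byte magic b"\x7fELF"; a plain-text ld.so.conf does not.
def looks_like_elf(path: str) -> bool:
    with open(path, "rb") as fh:
        return fh.read(4) == b"\x7fELF"

print(looks_like_elf("/usr/lib/ld.so.conf"))   # False: it is a text configuration file
print(looks_like_elf("/usr/bin/python3"))      # True for an actual binary (illustrative path)
```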
Jan 14 00:57:07.303020 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 00:57:07.307019 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 00:57:07.310055 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 00:57:07.314049 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 00:57:07.319561 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 00:57:07.320231 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 00:57:07.324112 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 00:57:07.331684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:07.338333 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 00:57:07.343415 systemd[1]: Started ntpd.service - Network Time Service. Jan 14 00:57:07.352359 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 00:57:07.360064 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 00:57:07.360236 oslogin_cache_refresh[1910]: Refreshing passwd entry cache Jan 14 00:57:07.365763 google_oslogin_nss_cache[1910]: oslogin_cache_refresh[1910]: Refreshing passwd entry cache Jan 14 00:57:07.368060 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 14 00:57:07.371890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 00:57:07.376412 jq[1908]: false Jan 14 00:57:07.376938 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 00:57:07.383243 oslogin_cache_refresh[1910]: Failure getting users, quitting Jan 14 00:57:07.384224 google_oslogin_nss_cache[1910]: oslogin_cache_refresh[1910]: Failure getting users, quitting Jan 14 00:57:07.384224 google_oslogin_nss_cache[1910]: oslogin_cache_refresh[1910]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 00:57:07.384224 google_oslogin_nss_cache[1910]: oslogin_cache_refresh[1910]: Refreshing group entry cache Jan 14 00:57:07.383267 oslogin_cache_refresh[1910]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 00:57:07.383315 oslogin_cache_refresh[1910]: Refreshing group entry cache Jan 14 00:57:07.385180 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 00:57:07.387558 oslogin_cache_refresh[1910]: Failure getting groups, quitting Jan 14 00:57:07.387788 google_oslogin_nss_cache[1910]: oslogin_cache_refresh[1910]: Failure getting groups, quitting Jan 14 00:57:07.387788 google_oslogin_nss_cache[1910]: oslogin_cache_refresh[1910]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 00:57:07.387572 oslogin_cache_refresh[1910]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 00:57:07.390984 extend-filesystems[1909]: Found /dev/nvme0n1p6 Jan 14 00:57:07.391975 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 00:57:07.392651 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 00:57:07.394157 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 14 00:57:07.406272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 00:57:07.433912 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 00:57:07.434840 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 00:57:07.435179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 00:57:07.435549 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 00:57:07.435819 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 00:57:07.449684 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 00:57:07.450999 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 00:57:07.470149 extend-filesystems[1909]: Found /dev/nvme0n1p9 Jan 14 00:57:07.491063 extend-filesystems[1909]: Checking size of /dev/nvme0n1p9 Jan 14 00:57:07.495193 jq[1925]: true Jan 14 00:57:07.501186 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 00:57:07.501534 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 00:57:07.535947 extend-filesystems[1909]: Resized partition /dev/nvme0n1p9 Jan 14 00:57:07.550679 jq[1958]: true Jan 14 00:57:07.580072 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 00:57:07.609678 ntpd[1913]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:35:08 UTC 2026 (1): Starting Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: ntpd 4.2.8p18@1.4062-o Tue Jan 13 21:35:08 UTC 2026 (1): Starting Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: ---------------------------------------------------- Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: ntp-4 is maintained by Network Time Foundation, Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: corporation. Support and training for ntp-4 are Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: available at https://www.nwtime.org/support Jan 14 00:57:07.610627 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: ---------------------------------------------------- Jan 14 00:57:07.609753 ntpd[1913]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 00:57:07.609764 ntpd[1913]: ---------------------------------------------------- Jan 14 00:57:07.609774 ntpd[1913]: ntp-4 is maintained by Network Time Foundation, Jan 14 00:57:07.609783 ntpd[1913]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 00:57:07.609792 ntpd[1913]: corporation. 
Support and training for ntp-4 are Jan 14 00:57:07.609801 ntpd[1913]: available at https://www.nwtime.org/support Jan 14 00:57:07.609811 ntpd[1913]: ---------------------------------------------------- Jan 14 00:57:07.613479 update_engine[1924]: I20260114 00:57:07.612661 1924 main.cc:92] Flatcar Update Engine starting Jan 14 00:57:07.619202 ntpd[1913]: proto: precision = 0.059 usec (-24) Jan 14 00:57:07.621011 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: proto: precision = 0.059 usec (-24) Jan 14 00:57:07.623630 ntpd[1913]: basedate set to 2026-01-01 Jan 14 00:57:07.623658 ntpd[1913]: gps base set to 2026-01-04 (week 2400) Jan 14 00:57:07.623804 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: basedate set to 2026-01-01 Jan 14 00:57:07.623804 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: gps base set to 2026-01-04 (week 2400) Jan 14 00:57:07.624473 dbus-daemon[1906]: [system] SELinux support is enabled Jan 14 00:57:07.624753 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 00:57:07.628765 ntpd[1913]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 00:57:07.628823 ntpd[1913]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 00:57:07.628916 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 00:57:07.628916 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 00:57:07.629074 ntpd[1913]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 00:57:07.630469 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 00:57:07.634256 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 00:57:07.634256 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listen normally on 3 eth0 172.31.19.12:123 Jan 14 00:57:07.634256 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listen normally on 4 lo [::1]:123 Jan 14 00:57:07.634256 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listen normally on 5 eth0 [fe80::4d8:8aff:fee4:3e55%2]:123 Jan 14 00:57:07.634256 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: Listening on routing socket on fd #22 for interface updates Jan 14 00:57:07.629109 ntpd[1913]: Listen normally on 3 eth0 172.31.19.12:123 Jan 14 00:57:07.630505 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 00:57:07.629141 ntpd[1913]: Listen normally on 4 lo [::1]:123 Jan 14 00:57:07.632144 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 00:57:07.629168 ntpd[1913]: Listen normally on 5 eth0 [fe80::4d8:8aff:fee4:3e55%2]:123 Jan 14 00:57:07.632167 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 00:57:07.629194 ntpd[1913]: Listening on routing socket on fd #22 for interface updates Jan 14 00:57:07.639708 extend-filesystems[1981]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 00:57:07.651044 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 14 00:57:07.654285 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
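ntpd starts above and binds UDP port 123 on lo and eth0. As a rough illustration of the wire protocol it speaks (not of ntpd's own implementation, which is a full daemon with clock discipline), here is a minimal SNTP query; the server hostname is an assumption for the example, not this host's configured peer.

```python
# Minimal SNTP client sketch: send a 48-byte mode-3 client packet to UDP/123
# and decode the server's transmit timestamp (seconds field at offset 40).
import socket, struct, time

NTP_EPOCH_OFFSET = 2208988800   # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def sntp_time(server="pool.ntp.org", timeout=2.0):
    packet = b"\x1b" + 47 * b"\0"            # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    tx_seconds = struct.unpack("!I", data[40:44])[0]
    return tx_seconds - NTP_EPOCH_OFFSET

print(time.ctime(sntp_time()))
```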
Jan 14 00:57:07.657285 tar[1933]: linux-amd64/LICENSE Jan 14 00:57:07.657583 tar[1933]: linux-amd64/helm Jan 14 00:57:07.659081 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 00:57:07.659121 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 00:57:07.659236 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 00:57:07.659236 ntpd[1913]: 14 Jan 00:57:07 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 00:57:07.673739 dbus-daemon[1906]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1548 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 00:57:07.681237 update_engine[1924]: I20260114 00:57:07.680068 1924 update_check_scheduler.cc:74] Next update check in 4m36s Jan 14 00:57:07.684281 systemd[1]: Started update-engine.service - Update Engine. Jan 14 00:57:07.695119 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 00:57:07.711129 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 00:57:07.724651 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 14 00:57:07.747645 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 14 00:57:07.766877 extend-filesystems[1981]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 14 00:57:07.766877 extend-filesystems[1981]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 14 00:57:07.766877 extend-filesystems[1981]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 14 00:57:07.804558 extend-filesystems[1909]: Resized filesystem in /dev/nvme0n1p9 Jan 14 00:57:07.849160 bash[2008]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:57:07.768773 systemd[1]: extend-filesystems.service: Deactivated successfully. 
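Converting the block counts in the resize messages above (4 KiB blocks, 1617920 before and 2604027 after) gives the growth in more familiar units: roughly 6.2 GiB before and 9.9 GiB after the online resize.

```python
# Convert the EXT4 block counts reported above into GiB.
BLOCK = 4096                     # block size, reported as (4k) in the log
old_blocks, new_blocks = 1_617_920, 2_604_027

def gib(blocks):
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")             # ≈ 6.17 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")             # ≈ 9.93 GiB
print(f"grown:  {gib(new_blocks - old_blocks):.2f} GiB")
```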
Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.781 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.793 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.800 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.802 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.809 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.809 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.811 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.813 INFO Fetch failed with 404: resource not found Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.817 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.818 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.818 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.820 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.821 INFO Fetch successful Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.821 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 14 00:57:07.849452 coreos-metadata[1905]: Jan 14 00:57:07.823 INFO Fetch successful Jan 14 00:57:07.769292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 00:57:07.773770 systemd-logind[1922]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 00:57:07.773795 systemd-logind[1922]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 14 00:57:07.773821 systemd-logind[1922]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 00:57:07.778051 systemd-logind[1922]: New seat seat0. Jan 14 00:57:07.790773 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 00:57:07.791566 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 00:57:07.816025 systemd[1]: Starting sshkeys.service... Jan 14 00:57:07.912295 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
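The coreos-metadata entries above follow the IMDSv2 pattern: a PUT to the token endpoint, then GETs against the 2021-01-03 metadata paths with the token attached. A hedged standard-library sketch of the same flow (an illustration, not the agent's actual code):

```python
# IMDSv2 flow as seen in the coreos-metadata log above: fetch a session token,
# then request metadata paths with that token.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("meta-data/instance-id", token))
```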
Jan 14 00:57:07.936704 dbus-daemon[1906]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 00:57:07.917009 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 00:57:07.925417 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 00:57:07.948352 dbus-daemon[1906]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2005 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 00:57:07.970446 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 00:57:08.000482 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 00:57:08.001804 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 00:57:08.125692 amazon-ssm-agent[1990]: Initializing new seelog logger Jan 14 00:57:08.131867 amazon-ssm-agent[1990]: New Seelog Logger Creation Complete Jan 14 00:57:08.131867 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.131867 amazon-ssm-agent[1990]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.131867 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 processing appconfig overrides Jan 14 00:57:08.137868 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.137868 amazon-ssm-agent[1990]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.137868 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 processing appconfig overrides Jan 14 00:57:08.139319 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.139319 amazon-ssm-agent[1990]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.139423 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 processing appconfig overrides Jan 14 00:57:08.139832 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.1365 INFO Proxy environment variables: Jan 14 00:57:08.153417 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.153417 amazon-ssm-agent[1990]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 14 00:57:08.153550 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 processing appconfig overrides Jan 14 00:57:08.166149 coreos-metadata[2030]: Jan 14 00:57:08.165 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 14 00:57:08.167724 coreos-metadata[2030]: Jan 14 00:57:08.167 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 14 00:57:08.173325 coreos-metadata[2030]: Jan 14 00:57:08.173 INFO Fetch successful Jan 14 00:57:08.173412 coreos-metadata[2030]: Jan 14 00:57:08.173 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 00:57:08.175409 coreos-metadata[2030]: Jan 14 00:57:08.175 INFO Fetch successful Jan 14 00:57:08.176983 unknown[2030]: wrote ssh authorized keys file for user: core Jan 14 00:57:08.241158 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.1366 INFO no_proxy: Jan 14 00:57:08.252046 update-ssh-keys[2091]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:57:08.252984 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 00:57:08.258245 systemd[1]: Finished sshkeys.service. Jan 14 00:57:08.314392 locksmithd[2006]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 00:57:08.349311 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.1366 INFO https_proxy: Jan 14 00:57:08.373031 polkitd[2039]: Started polkitd version 126 Jan 14 00:57:08.421466 polkitd[2039]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 00:57:08.422701 polkitd[2039]: Loading rules from directory /run/polkit-1/rules.d Jan 14 00:57:08.423407 polkitd[2039]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 00:57:08.426242 polkitd[2039]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 00:57:08.430311 polkitd[2039]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 00:57:08.430370 polkitd[2039]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 00:57:08.432910 polkitd[2039]: Finished loading, compiling and executing 2 rules Jan 14 00:57:08.438582 systemd[1]: Started polkit.service - Authorization Manager. Jan 14 00:57:08.445059 dbus-daemon[1906]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 00:57:08.445951 polkitd[2039]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 00:57:08.451101 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.1366 INFO http_proxy: Jan 14 00:57:08.504541 systemd-hostnamed[2005]: Hostname set to (transient) Jan 14 00:57:08.504662 systemd-resolved[1517]: System hostname changed to 'ip-172-31-19-12'. Jan 14 00:57:08.518332 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 00:57:08.553220 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.1367 INFO Checking if agent identity type OnPrem can be assumed Jan 14 00:57:08.589581 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 00:57:08.591799 containerd[1939]: time="2026-01-14T00:57:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 00:57:08.599034 systemd[1]: Starting issuegen.service - Generate /run/issue... 
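The sshkeys path above fetches meta-data/public-keys/0/openssh-key and rewrites the core user's authorized_keys. A minimal sketch of the file-writing half, assuming the key text has already been fetched (for instance with the IMDS helper sketched earlier); the 0700/0600 permissions follow OpenSSH's StrictModes expectations rather than anything stated in the log:

import os
from pathlib import Path

def write_authorized_keys(user_home: str, key_text: str) -> None:
    # sshd's StrictModes check rejects keys kept under lax permissions,
    # so keep the directory at 0700 and the file at 0600.
    ssh_dir = Path(user_home) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth_file = ssh_dir / "authorized_keys"
    auth_file.write_text(key_text.rstrip("\n") + "\n")
    os.chmod(auth_file, 0o600)

if __name__ == "__main__":
    write_authorized_keys("/home/core", "ssh-ed25519 AAAA... core@example")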
Jan 14 00:57:08.602772 containerd[1939]: time="2026-01-14T00:57:08.602725349Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 00:57:08.651525 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.1391 INFO Checking if agent identity type EC2 can be assumed Jan 14 00:57:08.674282 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 00:57:08.674835 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 00:57:08.682801 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 00:57:08.691601 containerd[1939]: time="2026-01-14T00:57:08.691554201Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.473µs" Jan 14 00:57:08.691701 containerd[1939]: time="2026-01-14T00:57:08.691598257Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 00:57:08.691701 containerd[1939]: time="2026-01-14T00:57:08.691684936Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 00:57:08.691761 containerd[1939]: time="2026-01-14T00:57:08.691701052Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693337998Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693391410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693469649Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693485637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693756388Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693775909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693792054Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 00:57:08.693940 containerd[1939]: time="2026-01-14T00:57:08.693815679Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.695497 containerd[1939]: time="2026-01-14T00:57:08.695458306Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.695497 containerd[1939]: time="2026-01-14T00:57:08.695494238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 00:57:08.695618 containerd[1939]: 
time="2026-01-14T00:57:08.695594544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.696294 containerd[1939]: time="2026-01-14T00:57:08.695837479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.696356 containerd[1939]: time="2026-01-14T00:57:08.696332919Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 00:57:08.696356 containerd[1939]: time="2026-01-14T00:57:08.696350533Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 00:57:08.696424 containerd[1939]: time="2026-01-14T00:57:08.696390596Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 00:57:08.696767 containerd[1939]: time="2026-01-14T00:57:08.696745927Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 00:57:08.697576 containerd[1939]: time="2026-01-14T00:57:08.696839030Z" level=info msg="metadata content store policy set" policy=shared Jan 14 00:57:08.704722 containerd[1939]: time="2026-01-14T00:57:08.704687326Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 00:57:08.704797 containerd[1939]: time="2026-01-14T00:57:08.704762953Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:57:08.704940 containerd[1939]: time="2026-01-14T00:57:08.704918025Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:57:08.704986 containerd[1939]: time="2026-01-14T00:57:08.704941241Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 00:57:08.704986 containerd[1939]: time="2026-01-14T00:57:08.704966246Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 00:57:08.705065 containerd[1939]: time="2026-01-14T00:57:08.704983506Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 00:57:08.705065 containerd[1939]: time="2026-01-14T00:57:08.705001046Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 00:57:08.705065 containerd[1939]: time="2026-01-14T00:57:08.705014826Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 00:57:08.705065 containerd[1939]: time="2026-01-14T00:57:08.705033078Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 00:57:08.705065 containerd[1939]: time="2026-01-14T00:57:08.705050884Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 00:57:08.705275 containerd[1939]: time="2026-01-14T00:57:08.705067466Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 00:57:08.705275 containerd[1939]: time="2026-01-14T00:57:08.705088258Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Jan 14 00:57:08.705275 containerd[1939]: time="2026-01-14T00:57:08.705102587Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 00:57:08.705275 containerd[1939]: time="2026-01-14T00:57:08.705127226Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 00:57:08.705275 containerd[1939]: time="2026-01-14T00:57:08.705259747Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705285327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705325420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705342315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705358380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705372939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705389944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705405209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 00:57:08.705438 containerd[1939]: time="2026-01-14T00:57:08.705427894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 00:57:08.705685 containerd[1939]: time="2026-01-14T00:57:08.705444239Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 00:57:08.705685 containerd[1939]: time="2026-01-14T00:57:08.705459062Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 00:57:08.705685 containerd[1939]: time="2026-01-14T00:57:08.705496111Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 00:57:08.705685 containerd[1939]: time="2026-01-14T00:57:08.705552877Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 00:57:08.705685 containerd[1939]: time="2026-01-14T00:57:08.705569670Z" level=info msg="Start snapshots syncer" Jan 14 00:57:08.705685 containerd[1939]: time="2026-01-14T00:57:08.705613256Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 00:57:08.707867 containerd[1939]: time="2026-01-14T00:57:08.706288030Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 00:57:08.707867 containerd[1939]: time="2026-01-14T00:57:08.706363606Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 00:57:08.708450 containerd[1939]: time="2026-01-14T00:57:08.708424107Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 00:57:08.708618 containerd[1939]: time="2026-01-14T00:57:08.708594138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 00:57:08.708663 containerd[1939]: time="2026-01-14T00:57:08.708634495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 00:57:08.708700 containerd[1939]: time="2026-01-14T00:57:08.708665568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 00:57:08.708700 containerd[1939]: time="2026-01-14T00:57:08.708681509Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 00:57:08.708775 containerd[1939]: time="2026-01-14T00:57:08.708698795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 00:57:08.708775 containerd[1939]: time="2026-01-14T00:57:08.708714850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 00:57:08.708775 containerd[1939]: time="2026-01-14T00:57:08.708730201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 00:57:08.708775 containerd[1939]: time="2026-01-14T00:57:08.708745036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
00:57:08.708775 containerd[1939]: time="2026-01-14T00:57:08.708761366Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 00:57:08.708948 containerd[1939]: time="2026-01-14T00:57:08.708812601Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:57:08.708948 containerd[1939]: time="2026-01-14T00:57:08.708831925Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:57:08.709117 containerd[1939]: time="2026-01-14T00:57:08.709095439Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:57:08.709155 containerd[1939]: time="2026-01-14T00:57:08.709124481Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:57:08.709155 containerd[1939]: time="2026-01-14T00:57:08.709139030Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 00:57:08.709221 containerd[1939]: time="2026-01-14T00:57:08.709153480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 00:57:08.709221 containerd[1939]: time="2026-01-14T00:57:08.709168675Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 00:57:08.709221 containerd[1939]: time="2026-01-14T00:57:08.709190755Z" level=info msg="runtime interface created" Jan 14 00:57:08.709221 containerd[1939]: time="2026-01-14T00:57:08.709208218Z" level=info msg="created NRI interface" Jan 14 00:57:08.709348 containerd[1939]: time="2026-01-14T00:57:08.709220031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 00:57:08.709348 containerd[1939]: time="2026-01-14T00:57:08.709237631Z" level=info msg="Connect containerd service" Jan 14 00:57:08.709348 containerd[1939]: time="2026-01-14T00:57:08.709272073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 00:57:08.712577 containerd[1939]: time="2026-01-14T00:57:08.712548949Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:57:08.725517 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 00:57:08.732482 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 00:57:08.741346 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 00:57:08.743713 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 00:57:08.750869 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4405 INFO Agent will take identity from EC2 Jan 14 00:57:08.812511 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 00:57:08.812511 amazon-ssm-agent[1990]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
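The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin looks for a CNI network config that a pod-network add-on normally installs later. A minimal sketch of writing a basic bridge/host-local conflist so the plugin can initialize; the file name, bridge name and subnet are illustrative assumptions, not values from this system:

import json
from pathlib import Path

# Hypothetical minimal CNI config; real clusters usually get theirs from a
# network add-on (flannel, calico, ...) rather than a hand-written file.
conflist = {
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
Path("/etc/cni/net.d/10-mynet.conflist").write_text(json.dumps(conflist, indent=2))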
Jan 14 00:57:08.812511 amazon-ssm-agent[1990]: 2026/01/14 00:57:08 processing appconfig overrides Jan 14 00:57:08.848437 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4508 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 14 00:57:08.848437 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4509 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 14 00:57:08.848437 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4509 INFO [amazon-ssm-agent] Starting Core Agent Jan 14 00:57:08.848437 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4509 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4509 INFO [Registrar] Starting registrar module Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4573 INFO [EC2Identity] Checking disk for registration info Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4574 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.4574 INFO [EC2Identity] Generating registration keypair Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.7677 INFO [EC2Identity] Checking write access before registering Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.7681 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8103 INFO [EC2Identity] EC2 registration was successful. Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8103 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8104 INFO [CredentialRefresher] credentialRefresher has started Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8105 INFO [CredentialRefresher] Starting credentials refresher loop Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8481 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 14 00:57:08.848653 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8483 INFO [CredentialRefresher] Credentials ready Jan 14 00:57:08.849446 amazon-ssm-agent[1990]: 2026-01-14 00:57:08.8485 INFO [CredentialRefresher] Next credential rotation will be in 29.999993619166666 minutes Jan 14 00:57:08.976735 tar[1933]: linux-amd64/README.md Jan 14 00:57:08.997080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 00:57:09.076456 containerd[1939]: time="2026-01-14T00:57:09.076328562Z" level=info msg="Start subscribing containerd event" Jan 14 00:57:09.076456 containerd[1939]: time="2026-01-14T00:57:09.076374199Z" level=info msg="Start recovering state" Jan 14 00:57:09.076567 containerd[1939]: time="2026-01-14T00:57:09.076492990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 00:57:09.076567 containerd[1939]: time="2026-01-14T00:57:09.076539301Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076742905Z" level=info msg="Start event monitor" Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076761412Z" level=info msg="Start cni network conf syncer for default" Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076781817Z" level=info msg="Start streaming server" Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076790895Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076797914Z" level=info msg="runtime interface starting up..." Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076803570Z" level=info msg="starting plugins..." Jan 14 00:57:09.076864 containerd[1939]: time="2026-01-14T00:57:09.076816273Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 00:57:09.077832 containerd[1939]: time="2026-01-14T00:57:09.077133680Z" level=info msg="containerd successfully booted in 0.485859s" Jan 14 00:57:09.077349 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 00:57:09.897195 amazon-ssm-agent[1990]: 2026-01-14 00:57:09.8969 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 14 00:57:09.998552 amazon-ssm-agent[1990]: 2026-01-14 00:57:09.8998 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2189) started Jan 14 00:57:10.098983 amazon-ssm-agent[1990]: 2026-01-14 00:57:09.8998 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 14 00:57:12.496239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:12.498836 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 00:57:12.501937 systemd[1]: Startup finished in 3.444s (kernel) + 10.030s (initrd) + 12.859s (userspace) = 26.334s. Jan 14 00:57:12.511326 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:14.244574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 00:57:14.247120 systemd[1]: Started sshd@0-172.31.19.12:22-68.220.241.50:38324.service - OpenSSH per-connection server daemon (68.220.241.50:38324). Jan 14 00:57:14.574012 kubelet[2205]: E0114 00:57:14.573889 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:14.576569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:14.576704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:14.577095 systemd[1]: kubelet.service: Consumed 1.003s CPU time, 269.9M memory peak. Jan 14 00:57:15.912610 systemd-resolved[1517]: Clock change detected. Flushing caches. 
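The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until a bootstrapper such as kubeadm writes it. A minimal sketch that drops in a bare KubeletConfiguration so the unit can at least parse its config; the field values are illustrative assumptions, and on a real node this file comes from kubeadm init/join rather than a hand-rolled script:

from pathlib import Path

# Hypothetical stand-in for the file kubeadm normally generates.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
"""

config_path = Path("/var/lib/kubelet/config.yaml")
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(KUBELET_CONFIG)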
Jan 14 00:57:16.048398 sshd[2215]: Accepted publickey for core from 68.220.241.50 port 38324 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:16.050333 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:16.057796 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 00:57:16.059610 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 00:57:16.068311 systemd-logind[1922]: New session 1 of user core. Jan 14 00:57:16.081301 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 00:57:16.084895 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 00:57:16.100664 (systemd)[2222]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:16.103773 systemd-logind[1922]: New session 2 of user core. Jan 14 00:57:16.293585 systemd[2222]: Queued start job for default target default.target. Jan 14 00:57:16.300241 systemd[2222]: Created slice app.slice - User Application Slice. Jan 14 00:57:16.300275 systemd[2222]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 00:57:16.300289 systemd[2222]: Reached target paths.target - Paths. Jan 14 00:57:16.300342 systemd[2222]: Reached target timers.target - Timers. Jan 14 00:57:16.301985 systemd[2222]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 00:57:16.302674 systemd[2222]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 00:57:16.314688 systemd[2222]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 00:57:16.315670 systemd[2222]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 00:57:16.315767 systemd[2222]: Reached target sockets.target - Sockets. Jan 14 00:57:16.315805 systemd[2222]: Reached target basic.target - Basic System. Jan 14 00:57:16.315841 systemd[2222]: Reached target default.target - Main User Target. Jan 14 00:57:16.315869 systemd[2222]: Startup finished in 205ms. Jan 14 00:57:16.316312 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 00:57:16.320557 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 00:57:16.573773 systemd[1]: Started sshd@1-172.31.19.12:22-68.220.241.50:38332.service - OpenSSH per-connection server daemon (68.220.241.50:38332). Jan 14 00:57:17.001761 sshd[2236]: Accepted publickey for core from 68.220.241.50 port 38332 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:17.003118 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:17.008177 systemd-logind[1922]: New session 3 of user core. Jan 14 00:57:17.015391 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 00:57:17.235637 sshd[2240]: Connection closed by 68.220.241.50 port 38332 Jan 14 00:57:17.237346 sshd-session[2236]: pam_unix(sshd:session): session closed for user core Jan 14 00:57:17.241145 systemd-logind[1922]: Session 3 logged out. Waiting for processes to exit. Jan 14 00:57:17.241420 systemd[1]: sshd@1-172.31.19.12:22-68.220.241.50:38332.service: Deactivated successfully. Jan 14 00:57:17.243251 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 00:57:17.245046 systemd-logind[1922]: Removed session 3. 
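Each accepted login above identifies the client key by its "RSA SHA256:..." fingerprint, which is the SHA-256 digest of the raw public key blob, base64-encoded with the padding stripped. A small sketch that reproduces the fingerprint from an authorized_keys-style line; reading the key from stdin is an illustrative choice:

import base64
import hashlib
import sys

def ssh_fingerprint(pubkey_line: str) -> str:
    # Field 2 of "ssh-rsa AAAA... comment" is the base64 key blob; OpenSSH's
    # SHA256 fingerprint is the digest of that decoded blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Usage: python3 fingerprint.py < ~/.ssh/id_ed25519.pub
    print(ssh_fingerprint(sys.stdin.readline()))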
Jan 14 00:57:17.326931 systemd[1]: Started sshd@2-172.31.19.12:22-68.220.241.50:38340.service - OpenSSH per-connection server daemon (68.220.241.50:38340). Jan 14 00:57:17.760285 sshd[2246]: Accepted publickey for core from 68.220.241.50 port 38340 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:17.761367 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:17.766479 systemd-logind[1922]: New session 4 of user core. Jan 14 00:57:17.771349 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 00:57:17.989597 sshd[2250]: Connection closed by 68.220.241.50 port 38340 Jan 14 00:57:17.990381 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jan 14 00:57:17.994613 systemd[1]: sshd@2-172.31.19.12:22-68.220.241.50:38340.service: Deactivated successfully. Jan 14 00:57:17.996317 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 00:57:17.997925 systemd-logind[1922]: Session 4 logged out. Waiting for processes to exit. Jan 14 00:57:17.998677 systemd-logind[1922]: Removed session 4. Jan 14 00:57:18.077853 systemd[1]: Started sshd@3-172.31.19.12:22-68.220.241.50:38352.service - OpenSSH per-connection server daemon (68.220.241.50:38352). Jan 14 00:57:18.515287 sshd[2256]: Accepted publickey for core from 68.220.241.50 port 38352 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:18.516365 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:18.521458 systemd-logind[1922]: New session 5 of user core. Jan 14 00:57:18.527425 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 00:57:18.753077 sshd[2260]: Connection closed by 68.220.241.50 port 38352 Jan 14 00:57:18.753621 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Jan 14 00:57:18.758288 systemd[1]: sshd@3-172.31.19.12:22-68.220.241.50:38352.service: Deactivated successfully. Jan 14 00:57:18.759876 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 00:57:18.761022 systemd-logind[1922]: Session 5 logged out. Waiting for processes to exit. Jan 14 00:57:18.762014 systemd-logind[1922]: Removed session 5. Jan 14 00:57:18.843820 systemd[1]: Started sshd@4-172.31.19.12:22-68.220.241.50:38354.service - OpenSSH per-connection server daemon (68.220.241.50:38354). Jan 14 00:57:19.275602 sshd[2266]: Accepted publickey for core from 68.220.241.50 port 38354 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:19.277056 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:19.281480 systemd-logind[1922]: New session 6 of user core. Jan 14 00:57:19.288417 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 00:57:19.507676 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 00:57:19.507974 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:57:19.521372 sudo[2271]: pam_unix(sudo:session): session closed for user root Jan 14 00:57:19.598025 sshd[2270]: Connection closed by 68.220.241.50 port 38354 Jan 14 00:57:19.599878 sshd-session[2266]: pam_unix(sshd:session): session closed for user core Jan 14 00:57:19.604660 systemd[1]: sshd@4-172.31.19.12:22-68.220.241.50:38354.service: Deactivated successfully. Jan 14 00:57:19.604866 systemd-logind[1922]: Session 6 logged out. Waiting for processes to exit. 
Jan 14 00:57:19.606796 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 00:57:19.608041 systemd-logind[1922]: Removed session 6. Jan 14 00:57:19.689021 systemd[1]: Started sshd@5-172.31.19.12:22-68.220.241.50:38356.service - OpenSSH per-connection server daemon (68.220.241.50:38356). Jan 14 00:57:20.113227 sshd[2278]: Accepted publickey for core from 68.220.241.50 port 38356 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:20.114237 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:20.120004 systemd-logind[1922]: New session 7 of user core. Jan 14 00:57:20.129384 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 00:57:20.271633 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 00:57:20.271914 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:57:20.274357 sudo[2284]: pam_unix(sudo:session): session closed for user root Jan 14 00:57:20.280457 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 00:57:20.280731 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:57:20.288723 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 00:57:20.331809 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 14 00:57:20.331915 kernel: audit: type=1305 audit(1768352240.327:239): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:57:20.327000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 00:57:20.332000 augenrules[2308]: No rules Jan 14 00:57:20.332488 kernel: audit: type=1300 audit(1768352240.327:239): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef93c41b0 a2=420 a3=0 items=0 ppid=2289 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:20.327000 audit[2308]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef93c41b0 a2=420 a3=0 items=0 ppid=2289 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:20.333528 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 00:57:20.333864 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 00:57:20.336714 sudo[2283]: pam_unix(sudo:session): session closed for user root Jan 14 00:57:20.337334 kernel: audit: type=1327 audit(1768352240.327:239): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:57:20.327000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 00:57:20.339233 kernel: audit: type=1130 audit(1768352240.333:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:57:20.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.344820 kernel: audit: type=1131 audit(1768352240.333:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.344904 kernel: audit: type=1106 audit(1768352240.335:242): pid=2283 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.335000 audit[2283]: USER_END pid=2283 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.348064 kernel: audit: type=1104 audit(1768352240.335:243): pid=2283 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.335000 audit[2283]: CRED_DISP pid=2283 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.413905 sshd[2282]: Connection closed by 68.220.241.50 port 38356 Jan 14 00:57:20.414830 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Jan 14 00:57:20.414000 audit[2278]: USER_END pid=2278 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.422224 kernel: audit: type=1106 audit(1768352240.414:244): pid=2278 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.418131 systemd-logind[1922]: Session 7 logged out. Waiting for processes to exit. Jan 14 00:57:20.418763 systemd[1]: sshd@5-172.31.19.12:22-68.220.241.50:38356.service: Deactivated successfully. Jan 14 00:57:20.421308 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 00:57:20.414000 audit[2278]: CRED_DISP pid=2278 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.424114 systemd-logind[1922]: Removed session 7. 
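The audit records above carry the triggering command line as a NUL-separated, hex-encoded PROCTITLE field (here, the auditctl run that loaded /etc/audit/audit.rules). A small sketch to turn such a field back into argv; the sample value is copied from the record above:

def decode_proctitle(hex_value: str) -> list[str]:
    # proctitle is argv joined with NUL bytes, hex-encoded by the audit subsystem.
    return [arg.decode() for arg in bytes.fromhex(hex_value).split(b"\x00") if arg]

if __name__ == "__main__":
    sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(sample))  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']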
Jan 14 00:57:20.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.19.12:22-68.220.241.50:38356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.427644 kernel: audit: type=1104 audit(1768352240.414:245): pid=2278 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.427718 kernel: audit: type=1131 audit(1768352240.416:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.19.12:22-68.220.241.50:38356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.500816 systemd[1]: Started sshd@6-172.31.19.12:22-68.220.241.50:38372.service - OpenSSH per-connection server daemon (68.220.241.50:38372). Jan 14 00:57:20.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.12:22-68.220.241.50:38372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:20.930000 audit[2317]: USER_ACCT pid=2317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.931701 sshd[2317]: Accepted publickey for core from 68.220.241.50 port 38372 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:57:20.931000 audit[2317]: CRED_ACQ pid=2317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.931000 audit[2317]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec003bba0 a2=3 a3=0 items=0 ppid=1 pid=2317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:20.931000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:57:20.933021 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:57:20.938089 systemd-logind[1922]: New session 8 of user core. Jan 14 00:57:20.944436 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 00:57:20.945000 audit[2317]: USER_START pid=2317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:20.947000 audit[2321]: CRED_ACQ pid=2321 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:57:21.089000 audit[2322]: USER_ACCT pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:21.091024 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 00:57:21.089000 audit[2322]: CRED_REFR pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:21.090000 audit[2322]: USER_START pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:57:21.091344 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 00:57:22.436279 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 00:57:22.451591 (dockerd)[2341]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 00:57:23.563533 dockerd[2341]: time="2026-01-14T00:57:23.563481495Z" level=info msg="Starting up" Jan 14 00:57:23.564490 dockerd[2341]: time="2026-01-14T00:57:23.564448083Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 00:57:23.575273 dockerd[2341]: time="2026-01-14T00:57:23.575231291Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 00:57:23.637759 dockerd[2341]: time="2026-01-14T00:57:23.637717247Z" level=info msg="Loading containers: start." 
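The NETFILTER_CFG audit records that follow show dockerd registering its chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2) in the nat and filter tables for both IPv4 and IPv6; each record's PROCTITLE field hex-encodes the underlying iptables/ip6tables invocation. A quick sketch to check that those chains exist once the daemon is up; the chain names are taken from these records, the verification approach is illustrative:

import subprocess

CHAINS = {
    "nat": ["DOCKER"],
    "filter": ["DOCKER", "DOCKER-FORWARD", "DOCKER-BRIDGE", "DOCKER-CT",
               "DOCKER-ISOLATION-STAGE-1", "DOCKER-ISOLATION-STAGE-2"],
}

for table, chains in CHAINS.items():
    for chain in chains:
        # iptables -L exits non-zero when the chain does not exist.
        rc = subprocess.run(
            ["iptables", "--wait", "-t", table, "-n", "-L", chain],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode
        print(f"{table}/{chain}: {'present' if rc == 0 else 'missing'}")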
Jan 14 00:57:23.648213 kernel: Initializing XFRM netlink socket Jan 14 00:57:23.828000 audit[2389]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.828000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd04cf0e70 a2=0 a3=0 items=0 ppid=2341 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:57:23.830000 audit[2391]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.830000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff9edb8270 a2=0 a3=0 items=0 ppid=2341 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:57:23.832000 audit[2393]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.832000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef2156880 a2=0 a3=0 items=0 ppid=2341 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:57:23.834000 audit[2395]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.834000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0e9e8150 a2=0 a3=0 items=0 ppid=2341 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:57:23.836000 audit[2397]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.836000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3c09a880 a2=0 a3=0 items=0 ppid=2341 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:57:23.837000 audit[2399]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.837000 audit[2399]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffd1c29f170 a2=0 a3=0 items=0 ppid=2341 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:57:23.839000 audit[2401]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.839000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2032db60 a2=0 a3=0 items=0 ppid=2341 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:57:23.841000 audit[2403]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.841000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc3afc9ef0 a2=0 a3=0 items=0 ppid=2341 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:57:23.920000 audit[2406]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.920000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff0c65cf90 a2=0 a3=0 items=0 ppid=2341 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 00:57:23.923000 audit[2408]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.923000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe5676ad0 a2=0 a3=0 items=0 ppid=2341 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:57:23.925000 audit[2410]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.925000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc9c30fa30 a2=0 
a3=0 items=0 ppid=2341 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:57:23.927000 audit[2412]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.927000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe429a12b0 a2=0 a3=0 items=0 ppid=2341 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:57:23.929000 audit[2414]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:23.929000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff087e4e20 a2=0 a3=0 items=0 ppid=2341 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:57:23.972000 audit[2444]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.972000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff9c409d40 a2=0 a3=0 items=0 ppid=2341 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 00:57:23.974000 audit[2446]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.974000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffebcbb1970 a2=0 a3=0 items=0 ppid=2341 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 00:57:23.976000 audit[2448]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.976000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdec203660 a2=0 a3=0 items=0 ppid=2341 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 00:57:23.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 00:57:23.978000 audit[2450]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.978000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe53a7a670 a2=0 a3=0 items=0 ppid=2341 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 00:57:23.980000 audit[2452]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.980000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea39f7aa0 a2=0 a3=0 items=0 ppid=2341 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 00:57:23.982000 audit[2454]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.982000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1e425350 a2=0 a3=0 items=0 ppid=2341 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.982000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:57:23.984000 audit[2456]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.984000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9c5ead50 a2=0 a3=0 items=0 ppid=2341 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.984000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:57:23.987000 audit[2458]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.987000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff52d02310 a2=0 a3=0 items=0 ppid=2341 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.987000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 00:57:23.989000 audit[2460]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.989000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffede0ca140 a2=0 a3=0 items=0 ppid=2341 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 00:57:23.991000 audit[2462]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.991000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffebadaca00 a2=0 a3=0 items=0 ppid=2341 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 00:57:23.993000 audit[2464]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.993000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc42f16c20 a2=0 a3=0 items=0 ppid=2341 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 00:57:23.996000 audit[2466]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.996000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc34425d70 a2=0 a3=0 items=0 ppid=2341 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 00:57:23.998000 audit[2468]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:23.998000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff374e0890 a2=0 a3=0 items=0 ppid=2341 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:23.998000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 00:57:24.003000 audit[2473]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.003000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc600931a0 a2=0 a3=0 items=0 ppid=2341 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:57:24.005000 audit[2475]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.005000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdae5cddb0 a2=0 a3=0 items=0 ppid=2341 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.005000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:57:24.008000 audit[2477]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.008000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffda9887960 a2=0 a3=0 items=0 ppid=2341 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:57:24.010000 audit[2479]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:24.010000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe6a5beab0 a2=0 a3=0 items=0 ppid=2341 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 00:57:24.012000 audit[2481]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:24.012000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff46c02900 a2=0 a3=0 items=0 ppid=2341 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.012000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 00:57:24.014000 audit[2483]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2483 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:24.014000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff4a342310 a2=0 a3=0 items=0 ppid=2341 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.014000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 00:57:24.025104 (udev-worker)[2362]: Network interface NamePolicy= disabled on kernel command line. Jan 14 00:57:24.034000 audit[2487]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.034000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd81b5a8a0 a2=0 a3=0 items=0 ppid=2341 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 00:57:24.037000 audit[2490]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.037000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd3223f200 a2=0 a3=0 items=0 ppid=2341 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 00:57:24.045000 audit[2498]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.045000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffdea62aaa0 a2=0 a3=0 items=0 ppid=2341 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 00:57:24.055000 audit[2504]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.055000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe199aaac0 a2=0 a3=0 items=0 ppid=2341 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 00:57:24.057000 audit[2506]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.057000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc2dfe5a20 a2=0 a3=0 items=0 ppid=2341 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 00:57:24.059000 audit[2508]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.059000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff8f961f80 a2=0 a3=0 items=0 ppid=2341 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 00:57:24.061000 audit[2510]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.061000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcb84f90d0 a2=0 a3=0 items=0 ppid=2341 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 00:57:24.063000 audit[2512]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:24.063000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd225e7c00 a2=0 a3=0 items=0 ppid=2341 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:24.063000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 00:57:24.065515 systemd-networkd[1548]: docker0: Link UP Jan 14 00:57:24.069946 dockerd[2341]: time="2026-01-14T00:57:24.069911027Z" level=info msg="Loading containers: done." 
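
A note on the audit records above: each PROCTITLE field is the process command line, hex-encoded with NUL-separated argv elements, so the exact iptables invocation dockerd issued can be recovered straight from the log. A minimal Python sketch, using the hex string copied from the pid=2473 record above:

```python
# Decode an audit PROCTITLE field (hex-encoded, NUL-separated argv).
# The hex below is copied from the pid=2473 record above.
hex_title = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
             "002D740066696C746572002D4E00444F434B45522D55534552")
argv = bytes.fromhex(hex_title).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# -> /usr/bin/iptables --wait -t filter -N DOCKER-USER
```
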
Jan 14 00:57:24.105907 dockerd[2341]: time="2026-01-14T00:57:24.105780896Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 00:57:24.105907 dockerd[2341]: time="2026-01-14T00:57:24.105866405Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 00:57:24.106477 dockerd[2341]: time="2026-01-14T00:57:24.106327147Z" level=info msg="Initializing buildkit" Jan 14 00:57:24.145434 dockerd[2341]: time="2026-01-14T00:57:24.145389378Z" level=info msg="Completed buildkit initialization" Jan 14 00:57:24.149599 dockerd[2341]: time="2026-01-14T00:57:24.149555106Z" level=info msg="Daemon has completed initialization" Jan 14 00:57:24.149727 dockerd[2341]: time="2026-01-14T00:57:24.149693075Z" level=info msg="API listen on /run/docker.sock" Jan 14 00:57:24.149925 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 00:57:24.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:26.066193 containerd[1939]: time="2026-01-14T00:57:26.066144432Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 00:57:26.066403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 00:57:26.068419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:26.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:26.336370 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 00:57:26.336427 kernel: audit: type=1130 audit(1768352246.334:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:26.334689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:26.345633 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:26.386700 kubelet[2560]: E0114 00:57:26.386638 2560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:26.390733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:26.390915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:26.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:57:26.391504 systemd[1]: kubelet.service: Consumed 166ms CPU time, 110.5M memory peak. 
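
The kubelet failure above is the usual pre-bootstrap state on a kubeadm-provisioned node: /var/lib/kubelet/config.yaml is only written once kubeadm init/join runs, so systemd keeps scheduling restarts and each attempt exits the same way until then. A rough triage sketch over a saved journal export (the journal.txt path is hypothetical):

```python
import re

# Surface the missing-config error and the restart counter from a saved
# journal dump; "journal.txt" is a hypothetical export of the log above.
err = re.compile(r'failed to load Kubelet config file (\S+)')
restart = re.compile(r'kubelet\.service: Scheduled restart job, restart counter is at (\d+)')
with open("journal.txt") as f:
    for line in f:
        if (m := err.search(line)):
            print("missing config:", m.group(1).rstrip(","))
        elif (m := restart.search(line)):
            print("restart counter:", m.group(1))
```
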
Jan 14 00:57:26.396211 kernel: audit: type=1131 audit(1768352246.390:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:57:26.702940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269565500.mount: Deactivated successfully. Jan 14 00:57:27.858980 containerd[1939]: time="2026-01-14T00:57:27.858927549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:27.860072 containerd[1939]: time="2026-01-14T00:57:27.859842731Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28448874" Jan 14 00:57:27.861085 containerd[1939]: time="2026-01-14T00:57:27.861058545Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:27.863272 containerd[1939]: time="2026-01-14T00:57:27.863247302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:27.864023 containerd[1939]: time="2026-01-14T00:57:27.863999160Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.797818104s" Jan 14 00:57:27.864100 containerd[1939]: time="2026-01-14T00:57:27.864088905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 00:57:27.864625 containerd[1939]: time="2026-01-14T00:57:27.864594089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 00:57:29.403866 containerd[1939]: time="2026-01-14T00:57:29.403813079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:29.405695 containerd[1939]: time="2026-01-14T00:57:29.405655327Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 00:57:29.407891 containerd[1939]: time="2026-01-14T00:57:29.407841199Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:29.411373 containerd[1939]: time="2026-01-14T00:57:29.411319373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:29.412324 containerd[1939]: time="2026-01-14T00:57:29.412026516Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.547402934s" Jan 14 00:57:29.412324 containerd[1939]: time="2026-01-14T00:57:29.412052867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 00:57:29.412868 containerd[1939]: time="2026-01-14T00:57:29.412840057Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 00:57:30.781356 containerd[1939]: time="2026-01-14T00:57:30.781308567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:30.783374 containerd[1939]: time="2026-01-14T00:57:30.783290773Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 00:57:30.785863 containerd[1939]: time="2026-01-14T00:57:30.785756277Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:30.789657 containerd[1939]: time="2026-01-14T00:57:30.789590870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:30.790638 containerd[1939]: time="2026-01-14T00:57:30.790480955Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.377608065s" Jan 14 00:57:30.790638 containerd[1939]: time="2026-01-14T00:57:30.790514761Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 00:57:30.791447 containerd[1939]: time="2026-01-14T00:57:30.791255667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 00:57:31.889108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840900056.mount: Deactivated successfully. 
Jan 14 00:57:32.516531 containerd[1939]: time="2026-01-14T00:57:32.516492016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:32.525092 containerd[1939]: time="2026-01-14T00:57:32.525008258Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Jan 14 00:57:32.530736 containerd[1939]: time="2026-01-14T00:57:32.530698900Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:32.535744 containerd[1939]: time="2026-01-14T00:57:32.535636657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:32.536376 containerd[1939]: time="2026-01-14T00:57:32.536351487Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.745066557s" Jan 14 00:57:32.536683 containerd[1939]: time="2026-01-14T00:57:32.536463908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 00:57:32.537258 containerd[1939]: time="2026-01-14T00:57:32.537226806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 00:57:33.364445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183199803.mount: Deactivated successfully. 
Jan 14 00:57:34.200753 containerd[1939]: time="2026-01-14T00:57:34.200701551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:34.201717 containerd[1939]: time="2026-01-14T00:57:34.201693983Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 14 00:57:34.202831 containerd[1939]: time="2026-01-14T00:57:34.202787600Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:34.205204 containerd[1939]: time="2026-01-14T00:57:34.205161078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:34.206348 containerd[1939]: time="2026-01-14T00:57:34.206100594Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.668803097s" Jan 14 00:57:34.206348 containerd[1939]: time="2026-01-14T00:57:34.206133410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 00:57:34.206716 containerd[1939]: time="2026-01-14T00:57:34.206692351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 00:57:34.640843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147047164.mount: Deactivated successfully. 
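
The containerd pull records above carry both the image size reported for the pull and the wall-clock pull time, so effective throughput falls straight out of them; for the coredns pull this works out to roughly 12.5 MB/s. A small sketch, with the record shortened and its quoting simplified:

```python
import re

# Estimate pull throughput from a containerd "Pulled image ... in ..." record.
# The record below is shortened from the coredns pull above (quoting simplified).
record = ('Pulled image "registry.k8s.io/coredns/coredns:v1.12.0" '
          'size "20939036" in 1.668803097s')
m = re.search(r'size "(\d+)" in ([0-9.]+)s', record)
if m:
    size_bytes, seconds = int(m.group(1)), float(m.group(2))
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")   # ~12.5 MB/s
```
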
Jan 14 00:57:34.646806 containerd[1939]: time="2026-01-14T00:57:34.646763658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:57:34.647650 containerd[1939]: time="2026-01-14T00:57:34.647486060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 14 00:57:34.648637 containerd[1939]: time="2026-01-14T00:57:34.648616135Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:57:34.650618 containerd[1939]: time="2026-01-14T00:57:34.650575108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 00:57:34.651205 containerd[1939]: time="2026-01-14T00:57:34.651048259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 444.221579ms" Jan 14 00:57:34.651205 containerd[1939]: time="2026-01-14T00:57:34.651073626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 00:57:34.651622 containerd[1939]: time="2026-01-14T00:57:34.651603206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 00:57:35.178559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485703822.mount: Deactivated successfully. Jan 14 00:57:36.566006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 00:57:36.569610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:36.871426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:36.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:36.876233 kernel: audit: type=1130 audit(1768352256.870:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:36.881732 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:37.004935 kubelet[2758]: E0114 00:57:37.004870 2758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:37.009349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:37.009557 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 00:57:37.015939 kernel: audit: type=1131 audit(1768352257.008:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:57:37.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:57:37.015325 systemd[1]: kubelet.service: Consumed 221ms CPU time, 108.9M memory peak. Jan 14 00:57:37.462459 containerd[1939]: time="2026-01-14T00:57:37.462413445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:37.463924 containerd[1939]: time="2026-01-14T00:57:37.463887843Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58133605" Jan 14 00:57:37.465530 containerd[1939]: time="2026-01-14T00:57:37.465481621Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:37.468783 containerd[1939]: time="2026-01-14T00:57:37.468733756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:57:37.469750 containerd[1939]: time="2026-01-14T00:57:37.469722186Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.818090959s" Jan 14 00:57:37.470008 containerd[1939]: time="2026-01-14T00:57:37.469896321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 00:57:39.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:39.841590 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 00:57:39.847333 kernel: audit: type=1131 audit(1768352259.840:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:39.859000 audit: BPF prog-id=66 op=UNLOAD Jan 14 00:57:39.862298 kernel: audit: type=1334 audit(1768352259.859:302): prog-id=66 op=UNLOAD Jan 14 00:57:40.634144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:40.634437 systemd[1]: kubelet.service: Consumed 221ms CPU time, 108.9M memory peak. Jan 14 00:57:40.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:57:40.643005 kernel: audit: type=1130 audit(1768352260.633:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:40.643100 kernel: audit: type=1131 audit(1768352260.633:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:40.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:40.641238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:40.678494 systemd[1]: Reload requested from client PID 2799 ('systemctl') (unit session-8.scope)... Jan 14 00:57:40.678514 systemd[1]: Reloading... Jan 14 00:57:40.816221 zram_generator::config[2849]: No configuration found. Jan 14 00:57:41.090206 systemd[1]: Reloading finished in 411 ms. Jan 14 00:57:41.115664 kernel: audit: type=1334 audit(1768352261.107:305): prog-id=70 op=LOAD Jan 14 00:57:41.115777 kernel: audit: type=1334 audit(1768352261.107:306): prog-id=63 op=UNLOAD Jan 14 00:57:41.107000 audit: BPF prog-id=70 op=LOAD Jan 14 00:57:41.107000 audit: BPF prog-id=63 op=UNLOAD Jan 14 00:57:41.120912 kernel: audit: type=1334 audit(1768352261.107:307): prog-id=71 op=LOAD Jan 14 00:57:41.121016 kernel: audit: type=1334 audit(1768352261.107:308): prog-id=72 op=LOAD Jan 14 00:57:41.107000 audit: BPF prog-id=71 op=LOAD Jan 14 00:57:41.107000 audit: BPF prog-id=72 op=LOAD Jan 14 00:57:41.107000 audit: BPF prog-id=64 op=UNLOAD Jan 14 00:57:41.107000 audit: BPF prog-id=65 op=UNLOAD Jan 14 00:57:41.108000 audit: BPF prog-id=73 op=LOAD Jan 14 00:57:41.108000 audit: BPF prog-id=74 op=LOAD Jan 14 00:57:41.108000 audit: BPF prog-id=53 op=UNLOAD Jan 14 00:57:41.108000 audit: BPF prog-id=54 op=UNLOAD Jan 14 00:57:41.110000 audit: BPF prog-id=75 op=LOAD Jan 14 00:57:41.110000 audit: BPF prog-id=47 op=UNLOAD Jan 14 00:57:41.110000 audit: BPF prog-id=76 op=LOAD Jan 14 00:57:41.110000 audit: BPF prog-id=77 op=LOAD Jan 14 00:57:41.110000 audit: BPF prog-id=48 op=UNLOAD Jan 14 00:57:41.110000 audit: BPF prog-id=49 op=UNLOAD Jan 14 00:57:41.111000 audit: BPF prog-id=78 op=LOAD Jan 14 00:57:41.111000 audit: BPF prog-id=56 op=UNLOAD Jan 14 00:57:41.111000 audit: BPF prog-id=79 op=LOAD Jan 14 00:57:41.111000 audit: BPF prog-id=80 op=LOAD Jan 14 00:57:41.111000 audit: BPF prog-id=57 op=UNLOAD Jan 14 00:57:41.111000 audit: BPF prog-id=58 op=UNLOAD Jan 14 00:57:41.112000 audit: BPF prog-id=81 op=LOAD Jan 14 00:57:41.112000 audit: BPF prog-id=69 op=UNLOAD Jan 14 00:57:41.113000 audit: BPF prog-id=82 op=LOAD Jan 14 00:57:41.113000 audit: BPF prog-id=62 op=UNLOAD Jan 14 00:57:41.115000 audit: BPF prog-id=83 op=LOAD Jan 14 00:57:41.115000 audit: BPF prog-id=50 op=UNLOAD Jan 14 00:57:41.115000 audit: BPF prog-id=84 op=LOAD Jan 14 00:57:41.115000 audit: BPF prog-id=85 op=LOAD Jan 14 00:57:41.115000 audit: BPF prog-id=51 op=UNLOAD Jan 14 00:57:41.115000 audit: BPF prog-id=52 op=UNLOAD Jan 14 00:57:41.117000 audit: BPF prog-id=86 op=LOAD Jan 14 00:57:41.117000 audit: BPF prog-id=59 op=UNLOAD Jan 14 00:57:41.117000 audit: BPF prog-id=87 op=LOAD Jan 14 00:57:41.117000 audit: BPF prog-id=88 op=LOAD Jan 14 00:57:41.117000 audit: BPF prog-id=60 op=UNLOAD Jan 14 
00:57:41.117000 audit: BPF prog-id=61 op=UNLOAD Jan 14 00:57:41.118000 audit: BPF prog-id=89 op=LOAD Jan 14 00:57:41.118000 audit: BPF prog-id=55 op=UNLOAD Jan 14 00:57:41.137749 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 00:57:41.137853 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 00:57:41.138242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:41.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 00:57:41.138314 systemd[1]: kubelet.service: Consumed 137ms CPU time, 98.5M memory peak. Jan 14 00:57:41.140177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:41.347234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:41.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:41.358558 (kubelet)[2907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:57:41.409914 kubelet[2907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:57:41.409914 kubelet[2907]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:57:41.409914 kubelet[2907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
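
The run of BPF prog-id LOAD/UNLOAD audit records during the reload above is consistent with systemd re-attaching its per-unit cgroup BPF programs; a quick tally over a journal export (again a hypothetical journal.txt) summarizes how many were swapped:

```python
import re
from collections import Counter

# Tally BPF prog-id LOAD/UNLOAD audit records from a journal export
# ("journal.txt" is hypothetical, as above).
pat = re.compile(r'audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)')
ops = Counter()
with open("journal.txt") as f:
    for line in f:
        if (m := pat.search(line)):
            ops[m.group(1)] += 1
print(dict(ops))   # e.g. {'LOAD': ..., 'UNLOAD': ...}
```
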
Jan 14 00:57:41.413030 kubelet[2907]: I0114 00:57:41.412442 2907 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:57:41.968214 kubelet[2907]: I0114 00:57:41.966499 2907 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 00:57:41.968214 kubelet[2907]: I0114 00:57:41.966529 2907 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:57:41.968214 kubelet[2907]: I0114 00:57:41.967014 2907 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:57:42.012693 kubelet[2907]: E0114 00:57:42.012656 2907 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:57:42.014548 kubelet[2907]: I0114 00:57:42.014511 2907 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:57:42.046979 kubelet[2907]: I0114 00:57:42.046938 2907 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:57:42.054285 kubelet[2907]: I0114 00:57:42.054257 2907 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 00:57:42.059973 kubelet[2907]: I0114 00:57:42.059930 2907 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:57:42.063749 kubelet[2907]: I0114 00:57:42.059966 2907 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:57:42.063749 kubelet[2907]: I0114 00:57:42.063750 2907 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:57:42.063942 kubelet[2907]: I0114 00:57:42.063764 
2907 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 00:57:42.065049 kubelet[2907]: I0114 00:57:42.065018 2907 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:57:42.068647 kubelet[2907]: I0114 00:57:42.068557 2907 kubelet.go:480] "Attempting to sync node with API server" Jan 14 00:57:42.068647 kubelet[2907]: I0114 00:57:42.068589 2907 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:57:42.070452 kubelet[2907]: I0114 00:57:42.070292 2907 kubelet.go:386] "Adding apiserver pod source" Jan 14 00:57:42.070452 kubelet[2907]: I0114 00:57:42.070317 2907 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:57:42.073563 kubelet[2907]: E0114 00:57:42.072720 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-12&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:57:42.078809 kubelet[2907]: E0114 00:57:42.078790 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:57:42.079162 kubelet[2907]: I0114 00:57:42.079149 2907 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:57:42.079725 kubelet[2907]: I0114 00:57:42.079698 2907 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:57:42.081526 kubelet[2907]: W0114 00:57:42.080460 2907 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
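
The container-manager record above embeds the resolved node configuration as one JSON blob (nodeConfig={...}); extracting and pretty-printing it makes the eviction thresholds and cgroup settings easier to read. A sketch using a shortened excerpt of that record:

```python
import json
import re

# Pull the nodeConfig={...} payload out of the kubelet "Creating Container
# Manager object based on Node Config" record and pretty-print it.
# The excerpt below is shortened from the full record above.
record = ('container_manager_linux.go:272] "Creating Container Manager object based on Node Config" '
          'nodeConfig={"NodeName":"ip-172-31-19-12","CgroupsPerQOS":true,"CgroupRoot":"/",'
          '"CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","CgroupVersion":2}')
m = re.search(r'nodeConfig=({.*})', record)
if m:
    print(json.dumps(json.loads(m.group(1)), indent=2))
```
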
Jan 14 00:57:42.087740 kubelet[2907]: I0114 00:57:42.087718 2907 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:57:42.087803 kubelet[2907]: I0114 00:57:42.087766 2907 server.go:1289] "Started kubelet" Jan 14 00:57:42.090457 kubelet[2907]: I0114 00:57:42.090400 2907 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:57:42.093358 kubelet[2907]: I0114 00:57:42.093222 2907 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:57:42.093783 kubelet[2907]: I0114 00:57:42.093763 2907 server.go:317] "Adding debug handlers to kubelet server" Jan 14 00:57:42.100000 audit[2921]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.103200 kernel: kauditd_printk_skb: 38 callbacks suppressed Jan 14 00:57:42.103262 kernel: audit: type=1325 audit(1768352262.100:347): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.100000 audit[2921]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffceda3b4c0 a2=0 a3=0 items=0 ppid=2907 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.110950 kernel: audit: type=1300 audit(1768352262.100:347): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffceda3b4c0 a2=0 a3=0 items=0 ppid=2907 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.111031 kubelet[2907]: I0114 00:57:42.110564 2907 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:57:42.111031 kubelet[2907]: I0114 00:57:42.110949 2907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:57:42.113759 kernel: audit: type=1327 audit(1768352262.100:347): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:57:42.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:57:42.113862 kubelet[2907]: I0114 00:57:42.111140 2907 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:57:42.113862 kubelet[2907]: I0114 00:57:42.113389 2907 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:57:42.113862 kubelet[2907]: E0114 00:57:42.113594 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:42.103000 audit[2923]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.103000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff429312a0 a2=0 a3=0 items=0 ppid=2907 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.117918 kubelet[2907]: E0114 00:57:42.111335 2907 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.12:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-12.188a7303637f8bda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-12,UID:ip-172-31-19-12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-12,},FirstTimestamp:2026-01-14 00:57:42.087740378 +0000 UTC m=+0.724091407,LastTimestamp:2026-01-14 00:57:42.087740378 +0000 UTC m=+0.724091407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-12,}" Jan 14 00:57:42.117918 kubelet[2907]: I0114 00:57:42.117215 2907 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:57:42.117918 kubelet[2907]: I0114 00:57:42.117265 2907 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:57:42.117918 kubelet[2907]: E0114 00:57:42.117545 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:57:42.117918 kubelet[2907]: E0114 00:57:42.117598 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="200ms" Jan 14 00:57:42.118894 kernel: audit: type=1325 audit(1768352262.103:348): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.118944 kernel: audit: type=1300 audit(1768352262.103:348): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff429312a0 a2=0 a3=0 items=0 ppid=2907 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:57:42.129178 kernel: audit: type=1327 audit(1768352262.103:348): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:57:42.129271 kernel: audit: type=1325 audit(1768352262.118:349): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.118000 audit[2925]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.118000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe7b704d20 a2=0 a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.134768 kubelet[2907]: I0114 00:57:42.130854 2907 factory.go:223] Registration of the systemd container factory 
successfully Jan 14 00:57:42.134768 kubelet[2907]: I0114 00:57:42.130933 2907 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:57:42.134768 kubelet[2907]: I0114 00:57:42.132749 2907 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:57:42.135199 kernel: audit: type=1300 audit(1768352262.118:349): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe7b704d20 a2=0 a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:57:42.128000 audit[2927]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.139427 kernel: audit: type=1327 audit(1768352262.118:349): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:57:42.139504 kernel: audit: type=1325 audit(1768352262.128:350): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.128000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe6f0c92f0 a2=0 a3=0 items=0 ppid=2907 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:57:42.145000 audit[2930]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.145000 audit[2930]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe32ad7090 a2=0 a3=0 items=0 ppid=2907 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 00:57:42.147994 kubelet[2907]: I0114 00:57:42.147863 2907 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 14 00:57:42.147000 audit[2931]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:42.147000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe62e1cfc0 a2=0 a3=0 items=0 ppid=2907 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 00:57:42.149498 kubelet[2907]: I0114 00:57:42.149406 2907 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 00:57:42.149498 kubelet[2907]: I0114 00:57:42.149422 2907 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 00:57:42.149498 kubelet[2907]: I0114 00:57:42.149442 2907 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 00:57:42.149498 kubelet[2907]: I0114 00:57:42.149448 2907 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 00:57:42.149498 kubelet[2907]: E0114 00:57:42.149486 2907 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:57:42.149000 audit[2932]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.149000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9dcb1680 a2=0 a3=0 items=0 ppid=2907 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:57:42.150000 audit[2933]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.150000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd412b24a0 a2=0 a3=0 items=0 ppid=2907 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:57:42.152000 audit[2934]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:57:42.152000 audit[2934]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffafafdd40 a2=0 a3=0 items=0 ppid=2907 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:57:42.152000 audit[2938]: NETFILTER_CFG 
table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:42.152000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff125d32e0 a2=0 a3=0 items=0 ppid=2907 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 00:57:42.156025 kubelet[2907]: E0114 00:57:42.155999 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:57:42.156000 audit[2940]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:42.156000 audit[2940]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc5300e90 a2=0 a3=0 items=0 ppid=2907 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 00:57:42.159000 audit[2941]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:57:42.159000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc35881e0 a2=0 a3=0 items=0 ppid=2907 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 00:57:42.161153 kubelet[2907]: I0114 00:57:42.161105 2907 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:57:42.161153 kubelet[2907]: I0114 00:57:42.161145 2907 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:57:42.161265 kubelet[2907]: I0114 00:57:42.161163 2907 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:57:42.164273 kubelet[2907]: I0114 00:57:42.164250 2907 policy_none.go:49] "None policy: Start" Jan 14 00:57:42.164273 kubelet[2907]: I0114 00:57:42.164273 2907 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:57:42.164393 kubelet[2907]: I0114 00:57:42.164285 2907 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:57:42.174129 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 00:57:42.186515 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 00:57:42.190072 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
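The audit PROCTITLE fields in the records above are the invoking command lines, hex-encoded with NUL bytes between the arguments. A minimal Python sketch decodes one; the sample value is copied verbatim from the KUBE-FIREWALL chain registration earlier in this section:

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    # The value below is taken from the iptables audit event above.
    proctitle = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572"
    print(bytes.fromhex(proctitle).decode().replace("\x00", " "))
    # iptables -w 5 -W 100000 -N KUBE-FIREWALL -t filter

The same decoding applies to the ip6tables and runc PROCTITLE values that follow.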
Jan 14 00:57:42.205440 kubelet[2907]: E0114 00:57:42.205379 2907 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:57:42.205793 kubelet[2907]: I0114 00:57:42.205780 2907 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:57:42.206133 kubelet[2907]: I0114 00:57:42.205827 2907 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:57:42.206373 kubelet[2907]: I0114 00:57:42.206355 2907 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:57:42.207408 kubelet[2907]: E0114 00:57:42.207209 2907 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 00:57:42.207408 kubelet[2907]: E0114 00:57:42.207253 2907 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-12\" not found" Jan 14 00:57:42.276773 systemd[1]: Created slice kubepods-burstable-pode7c96cb9b4999699be87538ae713d7ad.slice - libcontainer container kubepods-burstable-pode7c96cb9b4999699be87538ae713d7ad.slice. Jan 14 00:57:42.282986 kubelet[2907]: E0114 00:57:42.282907 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:42.287568 systemd[1]: Created slice kubepods-burstable-pod5d43ea737c4eb9bda0c1bd75cdfe7201.slice - libcontainer container kubepods-burstable-pod5d43ea737c4eb9bda0c1bd75cdfe7201.slice. Jan 14 00:57:42.302659 kubelet[2907]: E0114 00:57:42.302630 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:42.307150 systemd[1]: Created slice kubepods-burstable-pod03f9cbd5379ba06aca4d1687dd0c58e3.slice - libcontainer container kubepods-burstable-pod03f9cbd5379ba06aca4d1687dd0c58e3.slice. 
Jan 14 00:57:42.310867 kubelet[2907]: E0114 00:57:42.310700 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:42.311108 kubelet[2907]: I0114 00:57:42.311090 2907 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:42.311389 kubelet[2907]: E0114 00:57:42.311368 2907 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Jan 14 00:57:42.318029 kubelet[2907]: E0114 00:57:42.317984 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="400ms" Jan 14 00:57:42.418333 kubelet[2907]: I0114 00:57:42.418283 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:42.418333 kubelet[2907]: I0114 00:57:42.418323 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03f9cbd5379ba06aca4d1687dd0c58e3-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-12\" (UID: \"03f9cbd5379ba06aca4d1687dd0c58e3\") " pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:42.418333 kubelet[2907]: I0114 00:57:42.418343 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7c96cb9b4999699be87538ae713d7ad-ca-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"e7c96cb9b4999699be87538ae713d7ad\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:42.418808 kubelet[2907]: I0114 00:57:42.418359 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:42.418808 kubelet[2907]: I0114 00:57:42.418376 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:42.418808 kubelet[2907]: I0114 00:57:42.418393 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7c96cb9b4999699be87538ae713d7ad-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"e7c96cb9b4999699be87538ae713d7ad\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:42.418808 kubelet[2907]: I0114 00:57:42.418407 2907 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7c96cb9b4999699be87538ae713d7ad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"e7c96cb9b4999699be87538ae713d7ad\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:42.418808 kubelet[2907]: I0114 00:57:42.418421 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:42.418941 kubelet[2907]: I0114 00:57:42.418438 2907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:42.513024 kubelet[2907]: I0114 00:57:42.512989 2907 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:42.513343 kubelet[2907]: E0114 00:57:42.513316 2907 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Jan 14 00:57:42.584920 containerd[1939]: time="2026-01-14T00:57:42.584876472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-12,Uid:e7c96cb9b4999699be87538ae713d7ad,Namespace:kube-system,Attempt:0,}" Jan 14 00:57:42.603593 containerd[1939]: time="2026-01-14T00:57:42.603529302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-12,Uid:5d43ea737c4eb9bda0c1bd75cdfe7201,Namespace:kube-system,Attempt:0,}" Jan 14 00:57:42.612316 containerd[1939]: time="2026-01-14T00:57:42.612277367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-12,Uid:03f9cbd5379ba06aca4d1687dd0c58e3,Namespace:kube-system,Attempt:0,}" Jan 14 00:57:42.719454 kubelet[2907]: E0114 00:57:42.719386 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="800ms" Jan 14 00:57:42.721204 containerd[1939]: time="2026-01-14T00:57:42.720895868Z" level=info msg="connecting to shim b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501" address="unix:///run/containerd/s/d858b4ced17fbdbbc511d282f724a3c859204a12a9986fec7ceb8ceee921b89e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:57:42.723224 containerd[1939]: time="2026-01-14T00:57:42.723158710Z" level=info msg="connecting to shim d11a8b5ca1269e5c68bf9567e2f85e65f3b8e88ac94d55bbc95b56a5e3d7ef67" address="unix:///run/containerd/s/bef8dae280b85e1e4621bc1d8be669ea09bd2e43752593348f2d5acbef21692d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:57:42.726514 containerd[1939]: time="2026-01-14T00:57:42.726469258Z" level=info msg="connecting to shim e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc" 
address="unix:///run/containerd/s/873f1d8a013bfbdb4f30ef576093bc26dacbd930b2fc09a652a50b6d81060700" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:57:42.823463 systemd[1]: Started cri-containerd-b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501.scope - libcontainer container b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501. Jan 14 00:57:42.824539 systemd[1]: Started cri-containerd-d11a8b5ca1269e5c68bf9567e2f85e65f3b8e88ac94d55bbc95b56a5e3d7ef67.scope - libcontainer container d11a8b5ca1269e5c68bf9567e2f85e65f3b8e88ac94d55bbc95b56a5e3d7ef67. Jan 14 00:57:42.826303 systemd[1]: Started cri-containerd-e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc.scope - libcontainer container e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc. Jan 14 00:57:42.848000 audit: BPF prog-id=90 op=LOAD Jan 14 00:57:42.848000 audit: BPF prog-id=91 op=LOAD Jan 14 00:57:42.848000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.849000 audit: BPF prog-id=91 op=UNLOAD Jan 14 00:57:42.849000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.853000 audit: BPF prog-id=92 op=LOAD Jan 14 00:57:42.853000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.853000 audit: BPF prog-id=93 op=LOAD Jan 14 00:57:42.853000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.853000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.853000 audit: BPF prog-id=93 op=UNLOAD Jan 14 00:57:42.853000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.853000 audit: BPF prog-id=92 op=UNLOAD Jan 14 00:57:42.853000 audit[2995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.853000 audit: BPF prog-id=94 op=LOAD Jan 14 00:57:42.853000 audit[2995]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2966 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431316138623563613132363965356336386266393536376532663835 Jan 14 00:57:42.853000 audit: BPF prog-id=95 op=LOAD Jan 14 00:57:42.854000 audit: BPF prog-id=96 op=LOAD Jan 14 00:57:42.854000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.855000 audit: BPF prog-id=97 op=LOAD Jan 14 00:57:42.856000 audit: BPF prog-id=98 op=LOAD Jan 14 00:57:42.856000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.856000 audit: BPF prog-id=98 op=UNLOAD Jan 14 00:57:42.856000 audit[3004]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.856000 audit: BPF prog-id=99 op=LOAD Jan 14 00:57:42.856000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.856000 audit: BPF prog-id=100 op=LOAD Jan 14 00:57:42.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit: BPF prog-id=96 op=UNLOAD Jan 14 00:57:42.856000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit: BPF prog-id=101 op=LOAD Jan 14 00:57:42.856000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit: BPF prog-id=102 op=LOAD Jan 14 00:57:42.856000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit: BPF prog-id=102 op=UNLOAD Jan 14 00:57:42.856000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit: BPF prog-id=101 op=UNLOAD Jan 14 00:57:42.856000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit: BPF prog-id=103 op=LOAD Jan 14 00:57:42.856000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2975 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530363234666638303064303732636464376562396662653738663961 Jan 14 00:57:42.856000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.857000 audit: BPF prog-id=100 op=UNLOAD Jan 14 00:57:42.857000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.857000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.857000 audit: BPF prog-id=99 op=UNLOAD Jan 14 00:57:42.857000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.857000 audit: BPF prog-id=104 op=LOAD Jan 14 00:57:42.857000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2961 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363136303530633264396530626238316161616136363332366161 Jan 14 00:57:42.913146 containerd[1939]: time="2026-01-14T00:57:42.913056046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-12,Uid:03f9cbd5379ba06aca4d1687dd0c58e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc\"" Jan 14 00:57:42.921170 kubelet[2907]: I0114 00:57:42.920707 2907 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:42.921170 kubelet[2907]: E0114 00:57:42.920993 2907 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Jan 14 00:57:42.935726 containerd[1939]: time="2026-01-14T00:57:42.935690401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-12,Uid:5d43ea737c4eb9bda0c1bd75cdfe7201,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501\"" Jan 14 00:57:42.939732 containerd[1939]: time="2026-01-14T00:57:42.939674877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-12,Uid:e7c96cb9b4999699be87538ae713d7ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"d11a8b5ca1269e5c68bf9567e2f85e65f3b8e88ac94d55bbc95b56a5e3d7ef67\"" Jan 14 00:57:42.942215 containerd[1939]: time="2026-01-14T00:57:42.942066040Z" level=info msg="CreateContainer within sandbox \"e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 00:57:42.947365 containerd[1939]: time="2026-01-14T00:57:42.947218654Z" level=info msg="CreateContainer within sandbox \"b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 00:57:42.949918 containerd[1939]: time="2026-01-14T00:57:42.949894501Z" level=info msg="CreateContainer within sandbox \"d11a8b5ca1269e5c68bf9567e2f85e65f3b8e88ac94d55bbc95b56a5e3d7ef67\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 00:57:42.989199 containerd[1939]: time="2026-01-14T00:57:42.989046320Z" level=info msg="Container e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:57:42.989199 containerd[1939]: time="2026-01-14T00:57:42.989074860Z" level=info msg="Container c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:57:42.990008 containerd[1939]: time="2026-01-14T00:57:42.989978792Z" level=info msg="Container 52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:57:43.003772 containerd[1939]: time="2026-01-14T00:57:43.003732785Z" level=info msg="CreateContainer within sandbox \"d11a8b5ca1269e5c68bf9567e2f85e65f3b8e88ac94d55bbc95b56a5e3d7ef67\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048\"" Jan 14 00:57:43.004401 containerd[1939]: time="2026-01-14T00:57:43.004370442Z" level=info msg="StartContainer for \"e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048\"" Jan 14 00:57:43.005518 containerd[1939]: time="2026-01-14T00:57:43.005443110Z" level=info msg="connecting to shim e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048" address="unix:///run/containerd/s/bef8dae280b85e1e4621bc1d8be669ea09bd2e43752593348f2d5acbef21692d" protocol=ttrpc version=3 Jan 14 00:57:43.016322 containerd[1939]: time="2026-01-14T00:57:43.015885235Z" level=info msg="CreateContainer within sandbox \"b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9\"" Jan 14 00:57:43.017107 containerd[1939]: time="2026-01-14T00:57:43.017037187Z" level=info msg="CreateContainer within sandbox \"e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3\"" Jan 14 00:57:43.017823 containerd[1939]: time="2026-01-14T00:57:43.017807262Z" level=info msg="StartContainer for \"c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9\"" Jan 14 00:57:43.018321 containerd[1939]: time="2026-01-14T00:57:43.018271489Z" level=info msg="StartContainer for \"52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3\"" Jan 14 00:57:43.019209 containerd[1939]: time="2026-01-14T00:57:43.019172515Z" level=info msg="connecting to shim 52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3" address="unix:///run/containerd/s/873f1d8a013bfbdb4f30ef576093bc26dacbd930b2fc09a652a50b6d81060700" protocol=ttrpc version=3 Jan 14 00:57:43.020272 containerd[1939]: time="2026-01-14T00:57:43.019904382Z" level=info msg="connecting to shim c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9" address="unix:///run/containerd/s/d858b4ced17fbdbbc511d282f724a3c859204a12a9986fec7ceb8ceee921b89e" protocol=ttrpc version=3 Jan 14 00:57:43.026508 systemd[1]: Started 
cri-containerd-e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048.scope - libcontainer container e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048. Jan 14 00:57:43.042489 systemd[1]: Started cri-containerd-c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9.scope - libcontainer container c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9. Jan 14 00:57:43.052521 systemd[1]: Started cri-containerd-52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3.scope - libcontainer container 52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3. Jan 14 00:57:43.054000 audit: BPF prog-id=105 op=LOAD Jan 14 00:57:43.055000 audit: BPF prog-id=106 op=LOAD Jan 14 00:57:43.055000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.056000 audit: BPF prog-id=106 op=UNLOAD Jan 14 00:57:43.056000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.056000 audit: BPF prog-id=107 op=LOAD Jan 14 00:57:43.056000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.056000 audit: BPF prog-id=108 op=LOAD Jan 14 00:57:43.056000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.056000 audit: BPF prog-id=108 op=UNLOAD Jan 14 00:57:43.056000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.056000 audit: BPF prog-id=107 op=UNLOAD Jan 14 00:57:43.056000 audit[3082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.056000 audit: BPF prog-id=109 op=LOAD Jan 14 00:57:43.056000 audit[3082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2966 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537306536626439303061623165393164353163313965363936616239 Jan 14 00:57:43.061000 audit: BPF prog-id=110 op=LOAD Jan 14 00:57:43.061000 audit: BPF prog-id=111 op=LOAD Jan 14 00:57:43.061000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.062000 audit: BPF prog-id=111 op=UNLOAD Jan 14 00:57:43.062000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.062000 audit: BPF prog-id=112 op=LOAD Jan 14 00:57:43.062000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.062000 audit: BPF prog-id=113 op=LOAD Jan 14 00:57:43.062000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.062000 audit: BPF prog-id=113 op=UNLOAD Jan 14 00:57:43.062000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.062000 audit: BPF prog-id=112 op=UNLOAD Jan 14 00:57:43.062000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.062000 audit: BPF prog-id=114 op=LOAD Jan 14 00:57:43.062000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313130323537323764643062666538353634613231373264316464 Jan 14 00:57:43.079000 audit: BPF prog-id=115 op=LOAD Jan 14 00:57:43.080000 audit: BPF prog-id=116 op=LOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.080000 audit: BPF prog-id=116 op=UNLOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.080000 audit: BPF prog-id=117 op=LOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.080000 audit: BPF prog-id=118 op=LOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.080000 audit: BPF prog-id=118 op=UNLOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.080000 audit: BPF prog-id=117 op=UNLOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.080000 audit: BPF prog-id=119 op=LOAD Jan 14 00:57:43.080000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2975 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:57:43.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532656338613065643739656534363562383633316236363565303766 Jan 14 00:57:43.122307 containerd[1939]: time="2026-01-14T00:57:43.120996186Z" level=info msg="StartContainer for \"e70e6bd900ab1e91d51c19e696ab90e6e9657fab8664afa9004fd5804aab2048\" returns successfully" Jan 14 00:57:43.127086 containerd[1939]: time="2026-01-14T00:57:43.127059584Z" level=info msg="StartContainer for \"c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9\" returns successfully" Jan 14 00:57:43.150143 containerd[1939]: time="2026-01-14T00:57:43.150109778Z" level=info msg="StartContainer for \"52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3\" returns successfully" Jan 14 00:57:43.171100 kubelet[2907]: E0114 00:57:43.170762 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:43.174425 kubelet[2907]: E0114 00:57:43.174401 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:43.176590 kubelet[2907]: E0114 00:57:43.176566 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:43.296139 kubelet[2907]: E0114 00:57:43.296097 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:57:43.401571 kubelet[2907]: E0114 00:57:43.401466 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-12&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:57:43.521201 kubelet[2907]: E0114 00:57:43.520349 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="1.6s" Jan 14 00:57:43.553686 kubelet[2907]: E0114 00:57:43.553643 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://172.31.19.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:57:43.562311 kubelet[2907]: E0114 00:57:43.562266 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:57:43.582271 kubelet[2907]: E0114 00:57:43.582149 2907 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.12:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-12.188a7303637f8bda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-12,UID:ip-172-31-19-12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-12,},FirstTimestamp:2026-01-14 00:57:42.087740378 +0000 UTC m=+0.724091407,LastTimestamp:2026-01-14 00:57:42.087740378 +0000 UTC m=+0.724091407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-12,}" Jan 14 00:57:43.723312 kubelet[2907]: I0114 00:57:43.722785 2907 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:43.723570 kubelet[2907]: E0114 00:57:43.723541 2907 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Jan 14 00:57:44.122905 kubelet[2907]: E0114 00:57:44.122867 2907 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:57:44.177054 kubelet[2907]: E0114 00:57:44.177018 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:44.177365 kubelet[2907]: E0114 00:57:44.177350 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:45.121559 kubelet[2907]: E0114 00:57:45.121524 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="3.2s" Jan 14 00:57:45.325929 kubelet[2907]: I0114 00:57:45.325856 2907 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:45.326362 kubelet[2907]: E0114 00:57:45.326007 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" 
node="ip-172-31-19-12" Jan 14 00:57:45.326362 kubelet[2907]: E0114 00:57:45.326343 2907 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.12:6443/api/v1/nodes\": dial tcp 172.31.19.12:6443: connect: connection refused" node="ip-172-31-19-12" Jan 14 00:57:45.435779 kubelet[2907]: E0114 00:57:45.435676 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 00:57:45.446416 kubelet[2907]: E0114 00:57:45.446379 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-12&limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 00:57:45.897377 kubelet[2907]: E0114 00:57:45.897331 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 00:57:46.167541 kubelet[2907]: E0114 00:57:46.167462 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:46.327265 kubelet[2907]: E0114 00:57:46.327223 2907 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 00:57:48.271110 kubelet[2907]: E0114 00:57:48.271063 2907 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 00:57:48.322947 kubelet[2907]: E0114 00:57:48.322907 2907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": dial tcp 172.31.19.12:6443: connect: connection refused" interval="6.4s" Jan 14 00:57:48.529146 kubelet[2907]: I0114 00:57:48.528668 2907 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:50.035056 kubelet[2907]: I0114 00:57:50.035024 2907 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-12" Jan 14 00:57:50.035572 kubelet[2907]: E0114 00:57:50.035447 2907 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-12\": node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.074387 kubelet[2907]: E0114 00:57:50.074349 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not 
found" Jan 14 00:57:50.127741 kubelet[2907]: E0114 00:57:50.127714 2907 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-12\" not found" node="ip-172-31-19-12" Jan 14 00:57:50.175441 kubelet[2907]: E0114 00:57:50.175404 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.276529 kubelet[2907]: E0114 00:57:50.276485 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.377457 kubelet[2907]: E0114 00:57:50.377412 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.478563 kubelet[2907]: E0114 00:57:50.478523 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.579492 kubelet[2907]: E0114 00:57:50.579449 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.680543 kubelet[2907]: E0114 00:57:50.680444 2907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:50.816460 kubelet[2907]: I0114 00:57:50.816414 2907 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:50.824219 kubelet[2907]: E0114 00:57:50.824167 2907 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:50.824219 kubelet[2907]: I0114 00:57:50.824210 2907 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:50.825963 kubelet[2907]: E0114 00:57:50.825930 2907 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:50.825963 kubelet[2907]: I0114 00:57:50.825956 2907 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:50.827512 kubelet[2907]: E0114 00:57:50.827486 2907 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:51.079034 kubelet[2907]: I0114 00:57:51.078995 2907 apiserver.go:52] "Watching apiserver" Jan 14 00:57:51.118066 kubelet[2907]: I0114 00:57:51.118025 2907 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:57:52.332982 systemd[1]: Reload requested from client PID 3186 ('systemctl') (unit session-8.scope)... Jan 14 00:57:52.332998 systemd[1]: Reloading... Jan 14 00:57:52.428215 zram_generator::config[3233]: No configuration found. Jan 14 00:57:52.695789 systemd[1]: Reloading finished in 362 ms. Jan 14 00:57:52.730201 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:52.748683 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 00:57:52.748956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:57:52.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:52.749478 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 14 00:57:52.749543 kernel: audit: type=1131 audit(1768352272.747:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:52.749242 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 128.2M memory peak. Jan 14 00:57:52.757318 kernel: audit: type=1334 audit(1768352272.753:408): prog-id=120 op=LOAD Jan 14 00:57:52.757386 kernel: audit: type=1334 audit(1768352272.753:409): prog-id=81 op=UNLOAD Jan 14 00:57:52.753000 audit: BPF prog-id=120 op=LOAD Jan 14 00:57:52.753000 audit: BPF prog-id=81 op=UNLOAD Jan 14 00:57:52.754117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:52.757000 audit: BPF prog-id=121 op=LOAD Jan 14 00:57:52.757000 audit: BPF prog-id=70 op=UNLOAD Jan 14 00:57:52.757000 audit: BPF prog-id=122 op=LOAD Jan 14 00:57:52.757000 audit: BPF prog-id=123 op=LOAD Jan 14 00:57:52.757000 audit: BPF prog-id=71 op=UNLOAD Jan 14 00:57:52.757000 audit: BPF prog-id=72 op=UNLOAD Jan 14 00:57:52.757000 audit: BPF prog-id=124 op=LOAD Jan 14 00:57:52.757000 audit: BPF prog-id=86 op=UNLOAD Jan 14 00:57:52.757000 audit: BPF prog-id=125 op=LOAD Jan 14 00:57:52.757000 audit: BPF prog-id=126 op=LOAD Jan 14 00:57:52.757000 audit: BPF prog-id=87 op=UNLOAD Jan 14 00:57:52.757000 audit: BPF prog-id=88 op=UNLOAD Jan 14 00:57:52.759000 audit: BPF prog-id=127 op=LOAD Jan 14 00:57:52.759000 audit: BPF prog-id=83 op=UNLOAD Jan 14 00:57:52.759000 audit: BPF prog-id=128 op=LOAD Jan 14 00:57:52.759000 audit: BPF prog-id=129 op=LOAD Jan 14 00:57:52.759000 audit: BPF prog-id=84 op=UNLOAD Jan 14 00:57:52.761918 kernel: audit: type=1334 audit(1768352272.757:410): prog-id=121 op=LOAD Jan 14 00:57:52.761958 kernel: audit: type=1334 audit(1768352272.757:411): prog-id=70 op=UNLOAD Jan 14 00:57:52.761977 kernel: audit: type=1334 audit(1768352272.757:412): prog-id=122 op=LOAD Jan 14 00:57:52.761995 kernel: audit: type=1334 audit(1768352272.757:413): prog-id=123 op=LOAD Jan 14 00:57:52.762013 kernel: audit: type=1334 audit(1768352272.757:414): prog-id=71 op=UNLOAD Jan 14 00:57:52.762046 kernel: audit: type=1334 audit(1768352272.757:415): prog-id=72 op=UNLOAD Jan 14 00:57:52.762071 kernel: audit: type=1334 audit(1768352272.757:416): prog-id=124 op=LOAD Jan 14 00:57:52.759000 audit: BPF prog-id=85 op=UNLOAD Jan 14 00:57:52.760000 audit: BPF prog-id=130 op=LOAD Jan 14 00:57:52.760000 audit: BPF prog-id=82 op=UNLOAD Jan 14 00:57:52.761000 audit: BPF prog-id=131 op=LOAD Jan 14 00:57:52.761000 audit: BPF prog-id=75 op=UNLOAD Jan 14 00:57:52.761000 audit: BPF prog-id=132 op=LOAD Jan 14 00:57:52.761000 audit: BPF prog-id=133 op=LOAD Jan 14 00:57:52.761000 audit: BPF prog-id=76 op=UNLOAD Jan 14 00:57:52.761000 audit: BPF prog-id=77 op=UNLOAD Jan 14 00:57:52.761000 audit: BPF prog-id=134 op=LOAD Jan 14 00:57:52.761000 audit: BPF prog-id=135 op=LOAD Jan 14 00:57:52.761000 audit: BPF prog-id=73 op=UNLOAD Jan 14 00:57:52.761000 audit: BPF prog-id=74 op=UNLOAD Jan 14 00:57:52.764000 audit: BPF prog-id=136 op=LOAD Jan 14 00:57:52.764000 audit: BPF prog-id=89 op=UNLOAD Jan 14 00:57:52.774000 audit: BPF prog-id=137 op=LOAD Jan 14 00:57:52.774000 
audit: BPF prog-id=78 op=UNLOAD Jan 14 00:57:52.774000 audit: BPF prog-id=138 op=LOAD Jan 14 00:57:52.774000 audit: BPF prog-id=139 op=LOAD Jan 14 00:57:52.774000 audit: BPF prog-id=79 op=UNLOAD Jan 14 00:57:52.774000 audit: BPF prog-id=80 op=UNLOAD Jan 14 00:57:53.028205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:53.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:57:53.039515 (kubelet)[3293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 00:57:53.131759 kubelet[3293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:57:53.131759 kubelet[3293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 00:57:53.131759 kubelet[3293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 00:57:53.133824 kubelet[3293]: I0114 00:57:53.133778 3293 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 00:57:53.153207 kubelet[3293]: I0114 00:57:53.150683 3293 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 00:57:53.153207 kubelet[3293]: I0114 00:57:53.150715 3293 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 00:57:53.153207 kubelet[3293]: I0114 00:57:53.151039 3293 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 00:57:53.157418 kubelet[3293]: I0114 00:57:53.156901 3293 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 00:57:53.171839 kubelet[3293]: I0114 00:57:53.171800 3293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 00:57:53.191406 kubelet[3293]: I0114 00:57:53.191378 3293 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 00:57:53.195534 kubelet[3293]: I0114 00:57:53.195510 3293 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 00:57:53.195958 kubelet[3293]: I0114 00:57:53.195935 3293 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 00:57:53.196204 kubelet[3293]: I0114 00:57:53.196030 3293 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 00:57:53.196408 kubelet[3293]: I0114 00:57:53.196325 3293 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 00:57:53.196408 kubelet[3293]: I0114 00:57:53.196339 3293 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 00:57:53.197979 kubelet[3293]: I0114 00:57:53.197926 3293 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:57:53.199628 kubelet[3293]: I0114 00:57:53.199609 3293 kubelet.go:480] "Attempting to sync node with API server" Jan 14 00:57:53.199628 kubelet[3293]: I0114 00:57:53.199630 3293 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 00:57:53.199917 kubelet[3293]: I0114 00:57:53.199655 3293 kubelet.go:386] "Adding apiserver pod source" Jan 14 00:57:53.199917 kubelet[3293]: I0114 00:57:53.199673 3293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 00:57:53.218595 kubelet[3293]: I0114 00:57:53.218562 3293 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 00:57:53.219620 kubelet[3293]: I0114 00:57:53.219170 3293 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 00:57:53.224823 kubelet[3293]: I0114 00:57:53.224799 3293 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 00:57:53.224950 kubelet[3293]: I0114 00:57:53.224887 3293 server.go:1289] "Started kubelet" Jan 14 00:57:53.230052 kubelet[3293]: I0114 00:57:53.229312 3293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 00:57:53.243419 kubelet[3293]: I0114 
00:57:53.243369 3293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 00:57:53.245990 kubelet[3293]: I0114 00:57:53.245081 3293 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 00:57:53.245990 kubelet[3293]: E0114 00:57:53.245357 3293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-12\" not found" Jan 14 00:57:53.251436 kubelet[3293]: I0114 00:57:53.251416 3293 server.go:317] "Adding debug handlers to kubelet server" Jan 14 00:57:53.265206 kubelet[3293]: I0114 00:57:53.251553 3293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 00:57:53.267218 kubelet[3293]: I0114 00:57:53.253910 3293 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 00:57:53.267218 kubelet[3293]: I0114 00:57:53.256039 3293 reconciler.go:26] "Reconciler: start to sync state" Jan 14 00:57:53.268644 kubelet[3293]: I0114 00:57:53.268546 3293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 00:57:53.271382 kubelet[3293]: I0114 00:57:53.271254 3293 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 00:57:53.284707 kubelet[3293]: I0114 00:57:53.282657 3293 factory.go:223] Registration of the containerd container factory successfully Jan 14 00:57:53.285087 kubelet[3293]: I0114 00:57:53.285013 3293 factory.go:223] Registration of the systemd container factory successfully Jan 14 00:57:53.287219 kubelet[3293]: E0114 00:57:53.286545 3293 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 00:57:53.288057 kubelet[3293]: I0114 00:57:53.288027 3293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 00:57:53.294705 kubelet[3293]: I0114 00:57:53.294667 3293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 00:57:53.299598 kubelet[3293]: I0114 00:57:53.299570 3293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 00:57:53.299598 kubelet[3293]: I0114 00:57:53.299597 3293 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 00:57:53.299773 kubelet[3293]: I0114 00:57:53.299629 3293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 00:57:53.299773 kubelet[3293]: I0114 00:57:53.299641 3293 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 00:57:53.299773 kubelet[3293]: E0114 00:57:53.299705 3293 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 00:57:53.388204 kubelet[3293]: I0114 00:57:53.388094 3293 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 00:57:53.388204 kubelet[3293]: I0114 00:57:53.388118 3293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 00:57:53.388204 kubelet[3293]: I0114 00:57:53.388140 3293 state_mem.go:36] "Initialized new in-memory state store" Jan 14 00:57:53.388431 kubelet[3293]: I0114 00:57:53.388306 3293 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 00:57:53.388431 kubelet[3293]: I0114 00:57:53.388319 3293 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 00:57:53.388431 kubelet[3293]: I0114 00:57:53.388337 3293 policy_none.go:49] "None policy: Start" Jan 14 00:57:53.388431 kubelet[3293]: I0114 00:57:53.388350 3293 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 00:57:53.388431 kubelet[3293]: I0114 00:57:53.388363 3293 state_mem.go:35] "Initializing new in-memory state store" Jan 14 00:57:53.388605 kubelet[3293]: I0114 00:57:53.388483 3293 state_mem.go:75] "Updated machine memory state" Jan 14 00:57:53.399267 kubelet[3293]: E0114 00:57:53.399240 3293 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 00:57:53.400211 kubelet[3293]: E0114 00:57:53.400066 3293 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 00:57:53.400725 kubelet[3293]: I0114 00:57:53.400706 3293 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 00:57:53.401014 kubelet[3293]: I0114 00:57:53.400727 3293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 00:57:53.401342 kubelet[3293]: I0114 00:57:53.401324 3293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 00:57:53.406553 kubelet[3293]: E0114 00:57:53.406527 3293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 00:57:53.514340 kubelet[3293]: I0114 00:57:53.514169 3293 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-12" Jan 14 00:57:53.526860 kubelet[3293]: I0114 00:57:53.526472 3293 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-12" Jan 14 00:57:53.526860 kubelet[3293]: I0114 00:57:53.526543 3293 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-12" Jan 14 00:57:53.601147 kubelet[3293]: I0114 00:57:53.601071 3293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:53.601540 kubelet[3293]: I0114 00:57:53.601463 3293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:53.601850 kubelet[3293]: I0114 00:57:53.601832 3293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:53.672104 kubelet[3293]: I0114 00:57:53.672014 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7c96cb9b4999699be87538ae713d7ad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"e7c96cb9b4999699be87538ae713d7ad\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:53.672254 kubelet[3293]: I0114 00:57:53.672235 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:53.672292 kubelet[3293]: I0114 00:57:53.672258 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:53.672292 kubelet[3293]: I0114 00:57:53.672276 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:53.672343 kubelet[3293]: I0114 00:57:53.672293 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7c96cb9b4999699be87538ae713d7ad-ca-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"e7c96cb9b4999699be87538ae713d7ad\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:53.672343 kubelet[3293]: I0114 00:57:53.672306 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7c96cb9b4999699be87538ae713d7ad-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-12\" (UID: \"e7c96cb9b4999699be87538ae713d7ad\") " pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:53.672343 kubelet[3293]: I0114 00:57:53.672320 3293 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:53.672542 kubelet[3293]: I0114 00:57:53.672521 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d43ea737c4eb9bda0c1bd75cdfe7201-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-12\" (UID: \"5d43ea737c4eb9bda0c1bd75cdfe7201\") " pod="kube-system/kube-controller-manager-ip-172-31-19-12" Jan 14 00:57:53.672587 kubelet[3293]: I0114 00:57:53.672547 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03f9cbd5379ba06aca4d1687dd0c58e3-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-12\" (UID: \"03f9cbd5379ba06aca4d1687dd0c58e3\") " pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:54.202543 kubelet[3293]: I0114 00:57:54.202498 3293 apiserver.go:52] "Watching apiserver" Jan 14 00:57:54.267300 kubelet[3293]: I0114 00:57:54.267251 3293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 00:57:54.355559 kubelet[3293]: I0114 00:57:54.354594 3293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:54.355559 kubelet[3293]: I0114 00:57:54.354765 3293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:54.368449 kubelet[3293]: E0114 00:57:54.368405 3293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-12\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-12" Jan 14 00:57:54.370034 kubelet[3293]: E0114 00:57:54.370006 3293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-12\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-12" Jan 14 00:57:54.390895 kubelet[3293]: I0114 00:57:54.389962 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-12" podStartSLOduration=1.3899453849999999 podStartE2EDuration="1.389945385s" podCreationTimestamp="2026-01-14 00:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:57:54.378449616 +0000 UTC m=+1.324804459" watchObservedRunningTime="2026-01-14 00:57:54.389945385 +0000 UTC m=+1.336300217" Jan 14 00:57:54.401602 kubelet[3293]: I0114 00:57:54.400952 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-12" podStartSLOduration=1.400938453 podStartE2EDuration="1.400938453s" podCreationTimestamp="2026-01-14 00:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:57:54.390480778 +0000 UTC m=+1.336835641" watchObservedRunningTime="2026-01-14 00:57:54.400938453 +0000 UTC m=+1.347293304" Jan 14 00:57:54.411698 kubelet[3293]: I0114 00:57:54.411651 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-12" 
podStartSLOduration=1.411635806 podStartE2EDuration="1.411635806s" podCreationTimestamp="2026-01-14 00:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:57:54.401452505 +0000 UTC m=+1.347807338" watchObservedRunningTime="2026-01-14 00:57:54.411635806 +0000 UTC m=+1.357990657" Jan 14 00:57:54.471607 update_engine[1924]: I20260114 00:57:54.471261 1924 update_attempter.cc:509] Updating boot flags... Jan 14 00:57:58.195325 kubelet[3293]: I0114 00:57:58.195292 3293 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 00:57:58.210568 containerd[1939]: time="2026-01-14T00:57:58.210511036Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 00:57:58.210933 kubelet[3293]: I0114 00:57:58.210845 3293 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 00:58:00.527767 systemd[1]: Created slice kubepods-besteffort-podbac2990d_9b48_4549_a310_4f736661df00.slice - libcontainer container kubepods-besteffort-podbac2990d_9b48_4549_a310_4f736661df00.slice. Jan 14 00:58:00.553178 systemd[1]: Created slice kubepods-besteffort-pod86222dc6_13c8_49c8_97d2_24af60c81dd7.slice - libcontainer container kubepods-besteffort-pod86222dc6_13c8_49c8_97d2_24af60c81dd7.slice. Jan 14 00:58:00.621595 kubelet[3293]: I0114 00:58:00.621488 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86222dc6-13c8-49c8-97d2-24af60c81dd7-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mxcrz\" (UID: \"86222dc6-13c8-49c8-97d2-24af60c81dd7\") " pod="tigera-operator/tigera-operator-7dcd859c48-mxcrz" Jan 14 00:58:00.621595 kubelet[3293]: I0114 00:58:00.621531 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bac2990d-9b48-4549-a310-4f736661df00-kube-proxy\") pod \"kube-proxy-m45sf\" (UID: \"bac2990d-9b48-4549-a310-4f736661df00\") " pod="kube-system/kube-proxy-m45sf" Jan 14 00:58:00.621595 kubelet[3293]: I0114 00:58:00.621549 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2b2g\" (UniqueName: \"kubernetes.io/projected/86222dc6-13c8-49c8-97d2-24af60c81dd7-kube-api-access-p2b2g\") pod \"tigera-operator-7dcd859c48-mxcrz\" (UID: \"86222dc6-13c8-49c8-97d2-24af60c81dd7\") " pod="tigera-operator/tigera-operator-7dcd859c48-mxcrz" Jan 14 00:58:00.621595 kubelet[3293]: I0114 00:58:00.621563 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bac2990d-9b48-4549-a310-4f736661df00-xtables-lock\") pod \"kube-proxy-m45sf\" (UID: \"bac2990d-9b48-4549-a310-4f736661df00\") " pod="kube-system/kube-proxy-m45sf" Jan 14 00:58:00.621595 kubelet[3293]: I0114 00:58:00.621579 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56j28\" (UniqueName: \"kubernetes.io/projected/bac2990d-9b48-4549-a310-4f736661df00-kube-api-access-56j28\") pod \"kube-proxy-m45sf\" (UID: \"bac2990d-9b48-4549-a310-4f736661df00\") " pod="kube-system/kube-proxy-m45sf" Jan 14 00:58:00.622062 kubelet[3293]: I0114 00:58:00.621594 3293 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bac2990d-9b48-4549-a310-4f736661df00-lib-modules\") pod \"kube-proxy-m45sf\" (UID: \"bac2990d-9b48-4549-a310-4f736661df00\") " pod="kube-system/kube-proxy-m45sf" Jan 14 00:58:00.836162 containerd[1939]: time="2026-01-14T00:58:00.836045019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m45sf,Uid:bac2990d-9b48-4549-a310-4f736661df00,Namespace:kube-system,Attempt:0,}" Jan 14 00:58:00.855917 containerd[1939]: time="2026-01-14T00:58:00.855864536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mxcrz,Uid:86222dc6-13c8-49c8-97d2-24af60c81dd7,Namespace:tigera-operator,Attempt:0,}" Jan 14 00:58:00.866730 containerd[1939]: time="2026-01-14T00:58:00.866512368Z" level=info msg="connecting to shim 6e10d60450d4c956c421cbbc36fcce9ce598b28f1bf598db5869a30cd1aa176f" address="unix:///run/containerd/s/4c9cbebaaa51b5ec20c956b578db83d6291127d5f9ef1baa204dbd1595ad9bae" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:00.895485 systemd[1]: Started cri-containerd-6e10d60450d4c956c421cbbc36fcce9ce598b28f1bf598db5869a30cd1aa176f.scope - libcontainer container 6e10d60450d4c956c421cbbc36fcce9ce598b28f1bf598db5869a30cd1aa176f. Jan 14 00:58:00.910000 audit: BPF prog-id=140 op=LOAD Jan 14 00:58:00.912452 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 00:58:00.912533 kernel: audit: type=1334 audit(1768352280.910:449): prog-id=140 op=LOAD Jan 14 00:58:00.913000 audit: BPF prog-id=141 op=LOAD Jan 14 00:58:00.920766 kernel: audit: type=1334 audit(1768352280.913:450): prog-id=141 op=LOAD Jan 14 00:58:00.920863 kernel: audit: type=1300 audit(1768352280.913:450): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.925830 kernel: audit: type=1327 audit(1768352280.913:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.925953 containerd[1939]: time="2026-01-14T00:58:00.924062948Z" level=info msg="connecting to shim 927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667" address="unix:///run/containerd/s/7f5ce4b54665a7bea5f0471378e79b758ae01c8e0d8d3aa25eb32f222d6148a2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:00.913000 audit: BPF prog-id=141 op=UNLOAD Jan 14 00:58:00.929200 kernel: audit: type=1334 audit(1768352280.913:451): prog-id=141 op=UNLOAD Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.938212 kernel: audit: type=1300 audit(1768352280.913:451): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.944201 kernel: audit: type=1327 audit(1768352280.913:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: BPF prog-id=142 op=LOAD Jan 14 00:58:00.957712 kernel: audit: type=1334 audit(1768352280.913:452): prog-id=142 op=LOAD Jan 14 00:58:00.957791 kernel: audit: type=1300 audit(1768352280.913:452): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.962949 kernel: audit: type=1327 audit(1768352280.913:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: BPF prog-id=143 op=LOAD Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: BPF prog-id=143 op=UNLOAD Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: BPF prog-id=142 op=UNLOAD Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.913000 audit: BPF prog-id=144 op=LOAD Jan 14 00:58:00.913000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665313064363034353064346339353663343231636262633336666363 Jan 14 00:58:00.965385 systemd[1]: Started cri-containerd-927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667.scope - libcontainer container 927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667. 
Jan 14 00:58:00.973986 containerd[1939]: time="2026-01-14T00:58:00.973908084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m45sf,Uid:bac2990d-9b48-4549-a310-4f736661df00,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e10d60450d4c956c421cbbc36fcce9ce598b28f1bf598db5869a30cd1aa176f\"" Jan 14 00:58:00.981799 containerd[1939]: time="2026-01-14T00:58:00.981698874Z" level=info msg="CreateContainer within sandbox \"6e10d60450d4c956c421cbbc36fcce9ce598b28f1bf598db5869a30cd1aa176f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 00:58:00.982000 audit: BPF prog-id=145 op=LOAD Jan 14 00:58:00.982000 audit: BPF prog-id=146 op=LOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3485 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:00.982000 audit: BPF prog-id=146 op=UNLOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:00.982000 audit: BPF prog-id=147 op=LOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3485 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:00.982000 audit: BPF prog-id=148 op=LOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3485 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:00.982000 audit: BPF prog-id=148 op=UNLOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3496 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:00.982000 audit: BPF prog-id=147 op=UNLOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:00.982000 audit: BPF prog-id=149 op=LOAD Jan 14 00:58:00.982000 audit[3496]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3485 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:00.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376161663763356635623536333438663461386637313064373963 Jan 14 00:58:01.023199 containerd[1939]: time="2026-01-14T00:58:01.023154155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mxcrz,Uid:86222dc6-13c8-49c8-97d2-24af60c81dd7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667\"" Jan 14 00:58:01.024975 containerd[1939]: time="2026-01-14T00:58:01.024943712Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 00:58:01.033200 containerd[1939]: time="2026-01-14T00:58:01.033130817Z" level=info msg="Container a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:01.045798 containerd[1939]: time="2026-01-14T00:58:01.045751135Z" level=info msg="CreateContainer within sandbox \"6e10d60450d4c956c421cbbc36fcce9ce598b28f1bf598db5869a30cd1aa176f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2\"" Jan 14 00:58:01.046836 containerd[1939]: time="2026-01-14T00:58:01.046728697Z" level=info msg="StartContainer for \"a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2\"" Jan 14 00:58:01.076469 containerd[1939]: time="2026-01-14T00:58:01.076417750Z" level=info msg="connecting to shim a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2" address="unix:///run/containerd/s/4c9cbebaaa51b5ec20c956b578db83d6291127d5f9ef1baa204dbd1595ad9bae" protocol=ttrpc version=3 Jan 14 00:58:01.094419 systemd[1]: Started cri-containerd-a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2.scope - libcontainer container 
a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2. Jan 14 00:58:01.151000 audit: BPF prog-id=150 op=LOAD Jan 14 00:58:01.151000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3446 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:01.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132636532653864313933306265303232343666666465346532613336 Jan 14 00:58:01.151000 audit: BPF prog-id=151 op=LOAD Jan 14 00:58:01.151000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3446 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:01.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132636532653864313933306265303232343666666465346532613336 Jan 14 00:58:01.151000 audit: BPF prog-id=151 op=UNLOAD Jan 14 00:58:01.151000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:01.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132636532653864313933306265303232343666666465346532613336 Jan 14 00:58:01.151000 audit: BPF prog-id=150 op=UNLOAD Jan 14 00:58:01.151000 audit[3529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:01.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132636532653864313933306265303232343666666465346532613336 Jan 14 00:58:01.151000 audit: BPF prog-id=152 op=LOAD Jan 14 00:58:01.151000 audit[3529]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3446 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:01.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132636532653864313933306265303232343666666465346532613336 Jan 14 00:58:01.183784 containerd[1939]: time="2026-01-14T00:58:01.183736335Z" 
level=info msg="StartContainer for \"a2ce2e8d1930be02246ffde4e2a3608fac43ad6296a606c44ac7d6a0bfcc64d2\" returns successfully" Jan 14 00:58:01.382664 kubelet[3293]: I0114 00:58:01.382301 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m45sf" podStartSLOduration=2.382281563 podStartE2EDuration="2.382281563s" podCreationTimestamp="2026-01-14 00:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:58:01.382080203 +0000 UTC m=+8.328435056" watchObservedRunningTime="2026-01-14 00:58:01.382281563 +0000 UTC m=+8.328636417" Jan 14 00:58:02.463673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115039129.mount: Deactivated successfully. Jan 14 00:58:03.205493 containerd[1939]: time="2026-01-14T00:58:03.205438475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:03.207419 containerd[1939]: time="2026-01-14T00:58:03.207368452Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 14 00:58:03.209537 containerd[1939]: time="2026-01-14T00:58:03.209487631Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:03.212894 containerd[1939]: time="2026-01-14T00:58:03.212843981Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:03.213781 containerd[1939]: time="2026-01-14T00:58:03.213528714Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.188544363s" Jan 14 00:58:03.213781 containerd[1939]: time="2026-01-14T00:58:03.213563270Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 00:58:03.219731 containerd[1939]: time="2026-01-14T00:58:03.219700671Z" level=info msg="CreateContainer within sandbox \"927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 00:58:03.245746 containerd[1939]: time="2026-01-14T00:58:03.244172167Z" level=info msg="Container 31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:03.259234 containerd[1939]: time="2026-01-14T00:58:03.256767107Z" level=info msg="CreateContainer within sandbox \"927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\"" Jan 14 00:58:03.259234 containerd[1939]: time="2026-01-14T00:58:03.257968357Z" level=info msg="StartContainer for \"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\"" Jan 14 00:58:03.261829 containerd[1939]: time="2026-01-14T00:58:03.261544097Z" level=info msg="connecting to shim 
31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6" address="unix:///run/containerd/s/7f5ce4b54665a7bea5f0471378e79b758ae01c8e0d8d3aa25eb32f222d6148a2" protocol=ttrpc version=3 Jan 14 00:58:03.282389 systemd[1]: Started cri-containerd-31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6.scope - libcontainer container 31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6. Jan 14 00:58:03.292000 audit: BPF prog-id=153 op=LOAD Jan 14 00:58:03.293000 audit: BPF prog-id=154 op=LOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.293000 audit: BPF prog-id=154 op=UNLOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.293000 audit: BPF prog-id=155 op=LOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.293000 audit: BPF prog-id=156 op=LOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.293000 audit: BPF prog-id=156 op=UNLOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.293000 audit: BPF prog-id=155 op=UNLOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.293000 audit: BPF prog-id=157 op=LOAD Jan 14 00:58:03.293000 audit[3566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3485 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:03.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331653635373631303036316466326130333833316534616261636533 Jan 14 00:58:03.322221 containerd[1939]: time="2026-01-14T00:58:03.322127448Z" level=info msg="StartContainer for \"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\" returns successfully" Jan 14 00:58:06.254571 kernel: kauditd_printk_skb: 71 callbacks suppressed Jan 14 00:58:06.254698 kernel: audit: type=1325 audit(1768352286.249:478): table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.249000 audit[3634]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.256210 kernel: audit: type=1300 audit(1768352286.249:478): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff99da0c70 a2=0 a3=7fff99da0c5c items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.249000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff99da0c70 a2=0 a3=7fff99da0c5c items=0 ppid=3541 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:58:06.265287 kernel: audit: type=1327 audit(1768352286.249:478): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:58:06.265326 kernel: audit: type=1325 audit(1768352286.254:479): table=nat:55 family=2 entries=1 op=nft_register_chain pid=3636 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.254000 audit[3636]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.254000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe386d5fd0 a2=0 a3=7ffe386d5fbc items=0 ppid=3541 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.268941 kernel: audit: type=1300 audit(1768352286.254:479): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe386d5fd0 a2=0 a3=7ffe386d5fbc items=0 ppid=3541 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:58:06.275749 kernel: audit: type=1327 audit(1768352286.254:479): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:58:06.275819 kernel: audit: type=1325 audit(1768352286.256:480): table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.256000 audit[3637]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.256000 audit[3637]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0eca35d0 a2=0 a3=7ffc0eca35bc items=0 ppid=3541 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.279578 kernel: audit: type=1300 audit(1768352286.256:480): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0eca35d0 a2=0 a3=7ffc0eca35bc items=0 ppid=3541 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:58:06.284463 kernel: audit: type=1327 audit(1768352286.256:480): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 00:58:06.256000 audit[3638]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.289035 kernel: audit: type=1325 audit(1768352286.256:481): table=filter:57 family=2 entries=1 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.256000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5f228b90 a2=0 a3=7ffe5f228b7c items=0 ppid=3541 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.256000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:58:06.256000 audit[3639]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.256000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccacc0260 a2=0 a3=7ffccacc024c items=0 ppid=3541 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 00:58:06.261000 audit[3640]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.261000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff07b40050 a2=0 a3=7fff07b4003c items=0 ppid=3541 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.261000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 00:58:06.399000 audit[3643]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.399000 audit[3643]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdc5377260 a2=0 a3=7ffdc537724c items=0 ppid=3541 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:58:06.437000 audit[3645]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.437000 audit[3645]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc6cbdb060 a2=0 a3=7ffc6cbdb04c items=0 ppid=3541 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 00:58:06.441000 audit[3648]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.441000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd07f65040 a2=0 a3=7ffd07f6502c items=0 ppid=3541 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.441000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 00:58:06.443000 audit[3649]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.443000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5dbe7d90 a2=0 a3=7fff5dbe7d7c items=0 ppid=3541 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:58:06.445000 audit[3651]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.445000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc52ed0c40 a2=0 a3=7ffc52ed0c2c items=0 ppid=3541 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:58:06.447000 audit[3652]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.447000 audit[3652]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb215cac0 a2=0 a3=7ffdb215caac items=0 ppid=3541 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:58:06.449000 audit[3654]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.449000 audit[3654]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf99e3b40 a2=0 a3=7ffdf99e3b2c items=0 ppid=3541 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 00:58:06.453000 audit[3657]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.453000 audit[3657]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffdb2e4130 a2=0 a3=7fffdb2e411c items=0 
ppid=3541 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 00:58:06.454000 audit[3658]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3658 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.454000 audit[3658]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc884f680 a2=0 a3=7ffcc884f66c items=0 ppid=3541 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:58:06.457000 audit[3660]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.457000 audit[3660]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd073c53b0 a2=0 a3=7ffd073c539c items=0 ppid=3541 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:58:06.458000 audit[3661]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.458000 audit[3661]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa531a820 a2=0 a3=7fffa531a80c items=0 ppid=3541 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:58:06.461000 audit[3663]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.461000 audit[3663]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff24144c70 a2=0 a3=7fff24144c5c items=0 ppid=3541 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:58:06.466000 audit[3666]: NETFILTER_CFG table=filter:72 
family=2 entries=1 op=nft_register_rule pid=3666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.466000 audit[3666]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3b13a950 a2=0 a3=7ffd3b13a93c items=0 ppid=3541 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:58:06.470000 audit[3669]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.470000 audit[3669]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7a28cf20 a2=0 a3=7ffd7a28cf0c items=0 ppid=3541 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 00:58:06.472000 audit[3670]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.472000 audit[3670]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff65fbc020 a2=0 a3=7fff65fbc00c items=0 ppid=3541 pid=3670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:58:06.475000 audit[3672]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.475000 audit[3672]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd7291ae90 a2=0 a3=7ffd7291ae7c items=0 ppid=3541 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:58:06.479000 audit[3675]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.479000 audit[3675]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff02e92320 a2=0 a3=7fff02e9230c items=0 ppid=3541 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:58:06.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:58:06.481000 audit[3676]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.481000 audit[3676]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfb7c9900 a2=0 a3=7ffcfb7c98ec items=0 ppid=3541 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:58:06.483000 audit[3678]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 00:58:06.483000 audit[3678]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffa8731840 a2=0 a3=7fffa873182c items=0 ppid=3541 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:58:06.545000 audit[3684]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:06.545000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc574fa920 a2=0 a3=7ffc574fa90c items=0 ppid=3541 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:06.555000 audit[3684]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:06.555000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc574fa920 a2=0 a3=7ffc574fa90c items=0 ppid=3541 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:06.557000 audit[3689]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.557000 audit[3689]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffea238ac0 a2=0 a3=7fffea238aac items=0 ppid=3541 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 00:58:06.561000 audit[3691]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.561000 audit[3691]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff57060760 a2=0 a3=7fff5706074c items=0 ppid=3541 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 00:58:06.566000 audit[3694]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.566000 audit[3694]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd6cbe7bb0 a2=0 a3=7ffd6cbe7b9c items=0 ppid=3541 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 00:58:06.567000 audit[3695]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.567000 audit[3695]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffc888400 a2=0 a3=7ffffc8883ec items=0 ppid=3541 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 00:58:06.572000 audit[3697]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.572000 audit[3697]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb7f78b10 a2=0 a3=7ffcb7f78afc items=0 ppid=3541 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 00:58:06.574000 audit[3698]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3698 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 00:58:06.574000 audit[3698]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeca526720 a2=0 a3=7ffeca52670c items=0 ppid=3541 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 00:58:06.577000 audit[3700]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.577000 audit[3700]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffa0e861b0 a2=0 a3=7fffa0e8619c items=0 ppid=3541 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 00:58:06.581000 audit[3703]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.581000 audit[3703]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd82653c60 a2=0 a3=7ffd82653c4c items=0 ppid=3541 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 00:58:06.582000 audit[3704]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.582000 audit[3704]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdca271790 a2=0 a3=7ffdca27177c items=0 ppid=3541 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 00:58:06.585000 audit[3706]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.585000 audit[3706]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd44880550 a2=0 a3=7ffd4488053c items=0 ppid=3541 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.585000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 00:58:06.586000 audit[3707]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.586000 audit[3707]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6fff1f70 a2=0 a3=7ffe6fff1f5c items=0 ppid=3541 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 00:58:06.589000 audit[3709]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.589000 audit[3709]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc79545e0 a2=0 a3=7ffcc79545cc items=0 ppid=3541 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 00:58:06.594000 audit[3712]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.594000 audit[3712]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff45d1e690 a2=0 a3=7fff45d1e67c items=0 ppid=3541 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 00:58:06.598000 audit[3715]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.598000 audit[3715]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfe0bae00 a2=0 a3=7ffcfe0badec items=0 ppid=3541 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 00:58:06.599000 audit[3716]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 00:58:06.599000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff4877b7a0 a2=0 a3=7fff4877b78c items=0 ppid=3541 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 00:58:06.603000 audit[3718]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.603000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc92f33bd0 a2=0 a3=7ffc92f33bbc items=0 ppid=3541 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:58:06.607000 audit[3721]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.607000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd17224c80 a2=0 a3=7ffd17224c6c items=0 ppid=3541 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 00:58:06.609000 audit[3722]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.609000 audit[3722]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd88b5940 a2=0 a3=7fffd88b592c items=0 ppid=3541 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 00:58:06.611000 audit[3724]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.611000 audit[3724]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe8697b660 a2=0 a3=7ffe8697b64c items=0 ppid=3541 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.611000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 00:58:06.613000 audit[3725]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.613000 audit[3725]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe84bb1ba0 a2=0 a3=7ffe84bb1b8c items=0 ppid=3541 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 00:58:06.616000 audit[3727]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.616000 audit[3727]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc41630e0 a2=0 a3=7ffdc41630cc items=0 ppid=3541 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:58:06.619000 audit[3730]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 00:58:06.619000 audit[3730]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4d021b70 a2=0 a3=7ffe4d021b5c items=0 ppid=3541 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 00:58:06.626000 audit[3732]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:58:06.626000 audit[3732]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdfd162880 a2=0 a3=7ffdfd16286c items=0 ppid=3541 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:06.626000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:06.626000 audit[3732]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 00:58:06.626000 audit[3732]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdfd162880 a2=0 a3=7ffdfd16286c items=0 ppid=3541 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:58:06.626000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:39.676531 sudo[2322]: pam_unix(sudo:session): session closed for user root Jan 14 00:58:39.684254 kernel: kauditd_printk_skb: 143 callbacks suppressed Jan 14 00:58:39.684333 kernel: audit: type=1106 audit(1768352319.675:529): pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:58:39.675000 audit[2322]: USER_END pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:58:39.675000 audit[2322]: CRED_DISP pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:58:39.691304 kernel: audit: type=1104 audit(1768352319.675:530): pid=2322 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 00:58:39.761206 sshd[2321]: Connection closed by 68.220.241.50 port 38372 Jan 14 00:58:39.761818 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Jan 14 00:58:39.764000 audit[2317]: USER_END pid=2317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:58:39.773208 kernel: audit: type=1106 audit(1768352319.764:531): pid=2317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:58:39.776834 systemd[1]: sshd@6-172.31.19.12:22-68.220.241.50:38372.service: Deactivated successfully. Jan 14 00:58:39.790142 kernel: audit: type=1104 audit(1768352319.771:532): pid=2317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:58:39.771000 audit[2317]: CRED_DISP pid=2317 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:58:39.788485 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 00:58:39.790796 systemd[1]: session-8.scope: Consumed 5.427s CPU time, 153.7M memory peak. Jan 14 00:58:39.794478 systemd-logind[1922]: Session 8 logged out. Waiting for processes to exit. 
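The NETFILTER_CFG / SYSCALL / PROCTITLE audit records above carry the invoking command line in the proctitle field as hex-encoded bytes, with the original argv elements separated by NUL bytes. A minimal sketch for recovering the readable command from one of those records follows; the helper name decode_proctitle is illustrative only and not part of any tool appearing in this log.

    # Minimal sketch: recover the command behind an audit PROCTITLE record.
    # auditd hex-encodes the process title and joins argv with NUL bytes.
    def decode_proctitle(hex_str: str) -> str:
        raw = bytes.fromhex(hex_str)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # proctitle value copied from one of the NETFILTER_CFG records above
    example = ("69707461626C6573002D770035002D5700313030303030"
               "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(example))
    # prints: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Run against the other records, the same decoding shows kube-proxy creating the remaining KUBE-* chains and rules in the filter and nat tables for both iptables and ip6tables.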
Jan 14 00:58:39.804119 kernel: audit: type=1131 audit(1768352319.775:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.12:22-68.220.241.50:38372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:58:39.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.12:22-68.220.241.50:38372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:58:39.803699 systemd-logind[1922]: Removed session 8. Jan 14 00:58:40.284000 audit[3786]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:40.291206 kernel: audit: type=1325 audit(1768352320.284:534): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:40.284000 audit[3786]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd1c52c1d0 a2=0 a3=7ffd1c52c1bc items=0 ppid=3541 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:40.300224 kernel: audit: type=1300 audit(1768352320.284:534): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd1c52c1d0 a2=0 a3=7ffd1c52c1bc items=0 ppid=3541 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:40.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:40.305208 kernel: audit: type=1327 audit(1768352320.284:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:40.305685 kernel: audit: type=1325 audit(1768352320.290:535): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:40.290000 audit[3786]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:40.290000 audit[3786]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1c52c1d0 a2=0 a3=0 items=0 ppid=3541 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:40.316209 kernel: audit: type=1300 audit(1768352320.290:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1c52c1d0 a2=0 a3=0 items=0 ppid=3541 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:40.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:40.336000 audit[3788]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3788 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:40.336000 
audit[3788]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe10db0d20 a2=0 a3=7ffe10db0d0c items=0 ppid=3541 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:40.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:40.344000 audit[3788]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3788 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:40.344000 audit[3788]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe10db0d20 a2=0 a3=0 items=0 ppid=3541 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:40.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:42.820000 audit[3792]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3792 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:42.820000 audit[3792]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff3aee8700 a2=0 a3=7fff3aee86ec items=0 ppid=3541 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:42.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:42.826000 audit[3792]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3792 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:42.826000 audit[3792]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3aee8700 a2=0 a3=0 items=0 ppid=3541 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:42.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:42.842000 audit[3794]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:42.842000 audit[3794]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff67b132f0 a2=0 a3=7fff67b132dc items=0 ppid=3541 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:42.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:42.846000 audit[3794]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:42.846000 audit[3794]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff67b132f0 a2=0 a3=0 items=0 ppid=3541 pid=3794 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:42.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:43.860000 audit[3796]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:43.860000 audit[3796]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc1d7c2230 a2=0 a3=7ffc1d7c221c items=0 ppid=3541 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:43.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:43.866000 audit[3796]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:43.866000 audit[3796]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1d7c2230 a2=0 a3=0 items=0 ppid=3541 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:43.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:44.792296 kubelet[3293]: I0114 00:58:44.792224 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mxcrz" podStartSLOduration=43.597281838 podStartE2EDuration="45.787473915s" podCreationTimestamp="2026-01-14 00:57:59 +0000 UTC" firstStartedPulling="2026-01-14 00:58:01.024379073 +0000 UTC m=+7.970733904" lastFinishedPulling="2026-01-14 00:58:03.214571151 +0000 UTC m=+10.160925981" observedRunningTime="2026-01-14 00:58:03.387305165 +0000 UTC m=+10.333660016" watchObservedRunningTime="2026-01-14 00:58:44.787473915 +0000 UTC m=+51.733828766" Jan 14 00:58:44.809701 systemd[1]: Created slice kubepods-besteffort-pod123b6aa4_68b3_4e4b_8b3f_4c2f76fc5c2b.slice - libcontainer container kubepods-besteffort-pod123b6aa4_68b3_4e4b_8b3f_4c2f76fc5c2b.slice. 
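The "Observed pod startup duration" records in this log fit together arithmetically: for the tigera-operator pod above, podStartE2EDuration is the gap between podCreationTimestamp and watchObservedRunningTime, and podStartSLOduration appears to be that same gap minus the image-pull window (lastFinishedPulling - firstStartedPulling). A small sketch, assuming that relationship and using timestamps copied from the record (nanoseconds truncated to microseconds; variable names are illustrative), reproduces the logged values.

    # Sketch: reproduce the kubelet pod_startup_latency_tracker values logged
    # above for tigera-operator-7dcd859c48-mxcrz.
    from datetime import datetime, timezone

    UTC = timezone.utc
    created      = datetime(2026, 1, 14, 0, 57, 59, 0,      tzinfo=UTC)  # podCreationTimestamp
    pull_started = datetime(2026, 1, 14, 0, 58, 1, 24379,   tzinfo=UTC)  # firstStartedPulling
    pull_done    = datetime(2026, 1, 14, 0, 58, 3, 214571,  tzinfo=UTC)  # lastFinishedPulling
    observed     = datetime(2026, 1, 14, 0, 58, 44, 787473, tzinfo=UTC)  # watchObservedRunningTime

    e2e = (observed - created).total_seconds()              # ~45.787s -> podStartE2EDuration
    slo = e2e - (pull_done - pull_started).total_seconds()  # ~43.597s -> podStartSLOduration
    print(round(e2e, 3), round(slo, 3))

The ~2.190s difference between the two durations matches the "Pulled image quay.io/tigera/operator:v1.38.7 ... in 2.188544363s" containerd record earlier, plus the surrounding pull bookkeeping.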
Jan 14 00:58:44.914352 kubelet[3293]: I0114 00:58:44.914266 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jrbx\" (UniqueName: \"kubernetes.io/projected/123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b-kube-api-access-7jrbx\") pod \"calico-typha-569f8f6c8f-6v45x\" (UID: \"123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b\") " pod="calico-system/calico-typha-569f8f6c8f-6v45x" Jan 14 00:58:44.914352 kubelet[3293]: I0114 00:58:44.914319 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b-tigera-ca-bundle\") pod \"calico-typha-569f8f6c8f-6v45x\" (UID: \"123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b\") " pod="calico-system/calico-typha-569f8f6c8f-6v45x" Jan 14 00:58:44.914557 kubelet[3293]: I0114 00:58:44.914357 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b-typha-certs\") pod \"calico-typha-569f8f6c8f-6v45x\" (UID: \"123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b\") " pod="calico-system/calico-typha-569f8f6c8f-6v45x" Jan 14 00:58:44.931277 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 14 00:58:44.931408 kernel: audit: type=1325 audit(1768352324.925:544): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:44.925000 audit[3799]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:44.925000 audit[3799]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca55b9a00 a2=0 a3=7ffca55b99ec items=0 ppid=3541 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:44.942238 kernel: audit: type=1300 audit(1768352324.925:544): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca55b9a00 a2=0 a3=7ffca55b99ec items=0 ppid=3541 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:44.942395 kernel: audit: type=1327 audit(1768352324.925:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:44.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:44.942000 audit[3799]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:44.947372 kernel: audit: type=1325 audit(1768352324.942:545): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:44.955137 kernel: audit: type=1300 audit(1768352324.942:545): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca55b9a00 a2=0 a3=0 items=0 ppid=3541 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:58:44.942000 audit[3799]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca55b9a00 a2=0 a3=0 items=0 ppid=3541 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:44.942000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:44.959292 kernel: audit: type=1327 audit(1768352324.942:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:45.070962 systemd[1]: Created slice kubepods-besteffort-pod6ca87c96_5126_4889_92b4_ab44a916225f.slice - libcontainer container kubepods-besteffort-pod6ca87c96_5126_4889_92b4_ab44a916225f.slice. Jan 14 00:58:45.116030 kubelet[3293]: I0114 00:58:45.115992 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-cni-net-dir\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116274 kubelet[3293]: I0114 00:58:45.116247 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-cni-log-dir\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116274 kubelet[3293]: I0114 00:58:45.116273 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-flexvol-driver-host\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116405 kubelet[3293]: I0114 00:58:45.116289 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-var-lib-calico\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116405 kubelet[3293]: I0114 00:58:45.116303 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-var-run-calico\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116405 kubelet[3293]: I0114 00:58:45.116322 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-cni-bin-dir\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116405 kubelet[3293]: I0114 00:58:45.116335 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-xtables-lock\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " 
pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116405 kubelet[3293]: I0114 00:58:45.116354 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzplz\" (UniqueName: \"kubernetes.io/projected/6ca87c96-5126-4889-92b4-ab44a916225f-kube-api-access-fzplz\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116530 kubelet[3293]: I0114 00:58:45.116372 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ca87c96-5126-4889-92b4-ab44a916225f-node-certs\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116530 kubelet[3293]: I0114 00:58:45.116389 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca87c96-5126-4889-92b4-ab44a916225f-tigera-ca-bundle\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116530 kubelet[3293]: I0114 00:58:45.116406 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-lib-modules\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.116530 kubelet[3293]: I0114 00:58:45.116422 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ca87c96-5126-4889-92b4-ab44a916225f-policysync\") pod \"calico-node-kccgl\" (UID: \"6ca87c96-5126-4889-92b4-ab44a916225f\") " pod="calico-system/calico-node-kccgl" Jan 14 00:58:45.180603 containerd[1939]: time="2026-01-14T00:58:45.180545818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-569f8f6c8f-6v45x,Uid:123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b,Namespace:calico-system,Attempt:0,}" Jan 14 00:58:45.206510 containerd[1939]: time="2026-01-14T00:58:45.206470585Z" level=info msg="connecting to shim 3588c9352adc6dcf0363bdb8f8cd96239cb5271733a66db273fe3e7a0a0a157a" address="unix:///run/containerd/s/e92846c460716e5fcb7a380f9a5eb9b53a10f5736851489e3a359d0ac657c45c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:45.229674 kubelet[3293]: E0114 00:58:45.229636 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.229674 kubelet[3293]: W0114 00:58:45.229663 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.229925 kubelet[3293]: E0114 00:58:45.229684 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.235148 kubelet[3293]: E0114 00:58:45.235116 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.235148 kubelet[3293]: W0114 00:58:45.235136 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.235148 kubelet[3293]: E0114 00:58:45.235153 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.246068 kubelet[3293]: E0114 00:58:45.245287 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.246068 kubelet[3293]: W0114 00:58:45.245308 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.246068 kubelet[3293]: E0114 00:58:45.245326 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.257411 systemd[1]: Started cri-containerd-3588c9352adc6dcf0363bdb8f8cd96239cb5271733a66db273fe3e7a0a0a157a.scope - libcontainer container 3588c9352adc6dcf0363bdb8f8cd96239cb5271733a66db273fe3e7a0a0a157a. Jan 14 00:58:45.263107 kubelet[3293]: E0114 00:58:45.263064 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:45.284000 audit: BPF prog-id=158 op=LOAD Jan 14 00:58:45.287373 kernel: audit: type=1334 audit(1768352325.284:546): prog-id=158 op=LOAD Jan 14 00:58:45.286000 audit: BPF prog-id=159 op=LOAD Jan 14 00:58:45.294201 kernel: audit: type=1334 audit(1768352325.286:547): prog-id=159 op=LOAD Jan 14 00:58:45.294257 kernel: audit: type=1300 audit(1768352325.286:547): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.286000 audit[3823]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.295229 kernel: audit: type=1327 audit(1768352325.286:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.286000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.286000 audit: BPF prog-id=159 op=UNLOAD Jan 14 00:58:45.286000 audit[3823]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.288000 audit: BPF prog-id=160 op=LOAD Jan 14 00:58:45.288000 audit[3823]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.288000 audit: BPF prog-id=161 op=LOAD Jan 14 00:58:45.288000 audit[3823]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.288000 audit: BPF prog-id=161 op=UNLOAD Jan 14 00:58:45.288000 audit[3823]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.288000 audit: BPF prog-id=160 op=UNLOAD Jan 14 00:58:45.288000 audit[3823]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.288000 audit: BPF prog-id=162 op=LOAD Jan 14 00:58:45.288000 audit[3823]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3812 pid=3823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335383863393335326164633664636630333633626462386638636439 Jan 14 00:58:45.301867 kubelet[3293]: E0114 00:58:45.300244 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.301867 kubelet[3293]: W0114 00:58:45.300262 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.301867 kubelet[3293]: E0114 00:58:45.300281 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.301867 kubelet[3293]: E0114 00:58:45.300438 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.301867 kubelet[3293]: W0114 00:58:45.300445 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.301867 kubelet[3293]: E0114 00:58:45.300453 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.301867 kubelet[3293]: E0114 00:58:45.300571 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.301867 kubelet[3293]: W0114 00:58:45.300577 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.301867 kubelet[3293]: E0114 00:58:45.300583 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.303474 kubelet[3293]: E0114 00:58:45.303456 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.303553 kubelet[3293]: W0114 00:58:45.303472 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.303553 kubelet[3293]: E0114 00:58:45.303492 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.303899 kubelet[3293]: E0114 00:58:45.303885 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.303899 kubelet[3293]: W0114 00:58:45.303898 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.303969 kubelet[3293]: E0114 00:58:45.303908 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.306199 kubelet[3293]: E0114 00:58:45.304431 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.306199 kubelet[3293]: W0114 00:58:45.304443 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.306199 kubelet[3293]: E0114 00:58:45.304453 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.306199 kubelet[3293]: E0114 00:58:45.304758 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.306199 kubelet[3293]: W0114 00:58:45.304766 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.306199 kubelet[3293]: E0114 00:58:45.304775 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.306199 kubelet[3293]: E0114 00:58:45.305436 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.306199 kubelet[3293]: W0114 00:58:45.305446 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.306199 kubelet[3293]: E0114 00:58:45.305455 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.306477 kubelet[3293]: E0114 00:58:45.306385 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.306477 kubelet[3293]: W0114 00:58:45.306394 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.306477 kubelet[3293]: E0114 00:58:45.306404 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.306876 kubelet[3293]: E0114 00:58:45.306861 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.306876 kubelet[3293]: W0114 00:58:45.306875 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.306940 kubelet[3293]: E0114 00:58:45.306885 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.307174 kubelet[3293]: E0114 00:58:45.307161 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.307174 kubelet[3293]: W0114 00:58:45.307173 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.307306 kubelet[3293]: E0114 00:58:45.307292 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.308335 kubelet[3293]: E0114 00:58:45.308320 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.308378 kubelet[3293]: W0114 00:58:45.308334 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.308378 kubelet[3293]: E0114 00:58:45.308345 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.308554 kubelet[3293]: E0114 00:58:45.308542 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.308554 kubelet[3293]: W0114 00:58:45.308553 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.308624 kubelet[3293]: E0114 00:58:45.308561 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.308742 kubelet[3293]: E0114 00:58:45.308725 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.308771 kubelet[3293]: W0114 00:58:45.308739 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.308771 kubelet[3293]: E0114 00:58:45.308757 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.308954 kubelet[3293]: E0114 00:58:45.308942 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.308954 kubelet[3293]: W0114 00:58:45.308953 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.309015 kubelet[3293]: E0114 00:58:45.308960 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.309121 kubelet[3293]: E0114 00:58:45.309111 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.309121 kubelet[3293]: W0114 00:58:45.309120 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.309179 kubelet[3293]: E0114 00:58:45.309127 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.309395 kubelet[3293]: E0114 00:58:45.309382 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.309395 kubelet[3293]: W0114 00:58:45.309394 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.309469 kubelet[3293]: E0114 00:58:45.309402 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.310373 kubelet[3293]: E0114 00:58:45.310353 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.310373 kubelet[3293]: W0114 00:58:45.310372 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.310446 kubelet[3293]: E0114 00:58:45.310381 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.310565 kubelet[3293]: E0114 00:58:45.310553 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.310565 kubelet[3293]: W0114 00:58:45.310563 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.310616 kubelet[3293]: E0114 00:58:45.310578 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.310749 kubelet[3293]: E0114 00:58:45.310734 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.310781 kubelet[3293]: W0114 00:58:45.310755 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.310781 kubelet[3293]: E0114 00:58:45.310762 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.319686 kubelet[3293]: E0114 00:58:45.319656 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.319686 kubelet[3293]: W0114 00:58:45.319673 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.319686 kubelet[3293]: E0114 00:58:45.319687 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.319828 kubelet[3293]: I0114 00:58:45.319725 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e723b976-5fd1-4159-bd8a-f3fe80761ec5-registration-dir\") pod \"csi-node-driver-7tkkg\" (UID: \"e723b976-5fd1-4159-bd8a-f3fe80761ec5\") " pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:45.319926 kubelet[3293]: E0114 00:58:45.319911 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.319926 kubelet[3293]: W0114 00:58:45.319922 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.319983 kubelet[3293]: E0114 00:58:45.319930 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.319983 kubelet[3293]: I0114 00:58:45.319949 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lxnj\" (UniqueName: \"kubernetes.io/projected/e723b976-5fd1-4159-bd8a-f3fe80761ec5-kube-api-access-4lxnj\") pod \"csi-node-driver-7tkkg\" (UID: \"e723b976-5fd1-4159-bd8a-f3fe80761ec5\") " pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:45.320281 kubelet[3293]: E0114 00:58:45.320264 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.320318 kubelet[3293]: W0114 00:58:45.320280 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.320318 kubelet[3293]: E0114 00:58:45.320306 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.320537 kubelet[3293]: E0114 00:58:45.320522 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.320537 kubelet[3293]: W0114 00:58:45.320534 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.320593 kubelet[3293]: E0114 00:58:45.320542 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.321201 kubelet[3293]: I0114 00:58:45.321093 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e723b976-5fd1-4159-bd8a-f3fe80761ec5-kubelet-dir\") pod \"csi-node-driver-7tkkg\" (UID: \"e723b976-5fd1-4159-bd8a-f3fe80761ec5\") " pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:45.321201 kubelet[3293]: E0114 00:58:45.321160 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.321201 kubelet[3293]: W0114 00:58:45.321166 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.321201 kubelet[3293]: E0114 00:58:45.321200 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.321628 kubelet[3293]: E0114 00:58:45.321435 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.321628 kubelet[3293]: W0114 00:58:45.321446 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.321628 kubelet[3293]: E0114 00:58:45.321461 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.321986 kubelet[3293]: E0114 00:58:45.321970 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.321986 kubelet[3293]: W0114 00:58:45.321983 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.322080 kubelet[3293]: E0114 00:58:45.321995 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.322494 kubelet[3293]: E0114 00:58:45.322431 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.322494 kubelet[3293]: W0114 00:58:45.322453 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.322494 kubelet[3293]: E0114 00:58:45.322462 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.322696 kubelet[3293]: I0114 00:58:45.322576 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e723b976-5fd1-4159-bd8a-f3fe80761ec5-socket-dir\") pod \"csi-node-driver-7tkkg\" (UID: \"e723b976-5fd1-4159-bd8a-f3fe80761ec5\") " pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:45.322888 kubelet[3293]: E0114 00:58:45.322869 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.322888 kubelet[3293]: W0114 00:58:45.322886 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.322953 kubelet[3293]: E0114 00:58:45.322898 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.323387 kubelet[3293]: E0114 00:58:45.323217 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.323387 kubelet[3293]: W0114 00:58:45.323227 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.323387 kubelet[3293]: E0114 00:58:45.323235 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.323559 kubelet[3293]: E0114 00:58:45.323464 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.323559 kubelet[3293]: W0114 00:58:45.323470 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.323559 kubelet[3293]: E0114 00:58:45.323478 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.323855 kubelet[3293]: E0114 00:58:45.323768 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.323855 kubelet[3293]: W0114 00:58:45.323778 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.323855 kubelet[3293]: E0114 00:58:45.323786 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.324274 kubelet[3293]: E0114 00:58:45.324260 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.324274 kubelet[3293]: W0114 00:58:45.324273 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.324423 kubelet[3293]: E0114 00:58:45.324283 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.324423 kubelet[3293]: I0114 00:58:45.324307 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e723b976-5fd1-4159-bd8a-f3fe80761ec5-varrun\") pod \"csi-node-driver-7tkkg\" (UID: \"e723b976-5fd1-4159-bd8a-f3fe80761ec5\") " pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:45.324714 kubelet[3293]: E0114 00:58:45.324662 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.324714 kubelet[3293]: W0114 00:58:45.324674 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.324714 kubelet[3293]: E0114 00:58:45.324683 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.325284 kubelet[3293]: E0114 00:58:45.325244 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.325284 kubelet[3293]: W0114 00:58:45.325256 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.325284 kubelet[3293]: E0114 00:58:45.325265 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.342822 containerd[1939]: time="2026-01-14T00:58:45.342776275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-569f8f6c8f-6v45x,Uid:123b6aa4-68b3-4e4b-8b3f-4c2f76fc5c2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3588c9352adc6dcf0363bdb8f8cd96239cb5271733a66db273fe3e7a0a0a157a\"" Jan 14 00:58:45.344320 containerd[1939]: time="2026-01-14T00:58:45.344292620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 00:58:45.378042 containerd[1939]: time="2026-01-14T00:58:45.378005559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kccgl,Uid:6ca87c96-5126-4889-92b4-ab44a916225f,Namespace:calico-system,Attempt:0,}" Jan 14 00:58:45.406849 containerd[1939]: time="2026-01-14T00:58:45.406228810Z" level=info msg="connecting to shim 24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934" address="unix:///run/containerd/s/9db06182455f04e444547c68e3c2f603280889c9ef54fc85b02cc8805351b18f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:58:45.426341 kubelet[3293]: E0114 00:58:45.426308 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.426341 kubelet[3293]: W0114 00:58:45.426335 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.426607 kubelet[3293]: E0114 00:58:45.426362 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.426656 kubelet[3293]: E0114 00:58:45.426638 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.426656 kubelet[3293]: W0114 00:58:45.426649 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.426804 kubelet[3293]: E0114 00:58:45.426663 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.426935 kubelet[3293]: E0114 00:58:45.426920 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.427003 kubelet[3293]: W0114 00:58:45.426935 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.427003 kubelet[3293]: E0114 00:58:45.426946 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.427238 kubelet[3293]: E0114 00:58:45.427177 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.427238 kubelet[3293]: W0114 00:58:45.427205 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.427238 kubelet[3293]: E0114 00:58:45.427218 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.428584 kubelet[3293]: E0114 00:58:45.428563 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.429216 kubelet[3293]: W0114 00:58:45.428581 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.429216 kubelet[3293]: E0114 00:58:45.429207 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.430029 kubelet[3293]: E0114 00:58:45.429975 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.430153 kubelet[3293]: W0114 00:58:45.430076 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.430153 kubelet[3293]: E0114 00:58:45.430093 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.430692 kubelet[3293]: E0114 00:58:45.430574 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.430692 kubelet[3293]: W0114 00:58:45.430591 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.430692 kubelet[3293]: E0114 00:58:45.430604 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.430962 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.432472 kubelet[3293]: W0114 00:58:45.430972 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.430987 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.431633 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.432472 kubelet[3293]: W0114 00:58:45.431646 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.431659 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.431840 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.432472 kubelet[3293]: W0114 00:58:45.431849 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.431859 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.432472 kubelet[3293]: E0114 00:58:45.432020 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.432898 kubelet[3293]: W0114 00:58:45.432028 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.432898 kubelet[3293]: E0114 00:58:45.432040 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.432898 kubelet[3293]: E0114 00:58:45.432373 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.432898 kubelet[3293]: W0114 00:58:45.432383 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.432898 kubelet[3293]: E0114 00:58:45.432395 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.432898 kubelet[3293]: E0114 00:58:45.432858 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.432898 kubelet[3293]: W0114 00:58:45.432869 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.432898 kubelet[3293]: E0114 00:58:45.432882 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.433355 kubelet[3293]: E0114 00:58:45.433340 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.433355 kubelet[3293]: W0114 00:58:45.433355 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.433453 kubelet[3293]: E0114 00:58:45.433368 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.434008 kubelet[3293]: E0114 00:58:45.433987 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.434008 kubelet[3293]: W0114 00:58:45.434006 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.434123 kubelet[3293]: E0114 00:58:45.434019 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.434952 kubelet[3293]: E0114 00:58:45.434928 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.434952 kubelet[3293]: W0114 00:58:45.434946 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.435070 kubelet[3293]: E0114 00:58:45.434960 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.435358 kubelet[3293]: E0114 00:58:45.435336 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.435358 kubelet[3293]: W0114 00:58:45.435353 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.435470 kubelet[3293]: E0114 00:58:45.435367 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.435913 kubelet[3293]: E0114 00:58:45.435883 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.435913 kubelet[3293]: W0114 00:58:45.435897 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.436023 kubelet[3293]: E0114 00:58:45.435916 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.436358 kubelet[3293]: E0114 00:58:45.436341 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.436358 kubelet[3293]: W0114 00:58:45.436358 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.436469 kubelet[3293]: E0114 00:58:45.436371 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.437371 kubelet[3293]: E0114 00:58:45.436739 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.437371 kubelet[3293]: W0114 00:58:45.436751 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.437371 kubelet[3293]: E0114 00:58:45.436764 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.437371 kubelet[3293]: E0114 00:58:45.437000 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.437371 kubelet[3293]: W0114 00:58:45.437010 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.437371 kubelet[3293]: E0114 00:58:45.437021 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.438023 kubelet[3293]: E0114 00:58:45.437917 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.438023 kubelet[3293]: W0114 00:58:45.437930 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.438023 kubelet[3293]: E0114 00:58:45.437944 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:45.438602 kubelet[3293]: E0114 00:58:45.438583 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.438602 kubelet[3293]: W0114 00:58:45.438600 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.438703 kubelet[3293]: E0114 00:58:45.438613 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.439527 kubelet[3293]: E0114 00:58:45.439456 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.439527 kubelet[3293]: W0114 00:58:45.439471 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.439527 kubelet[3293]: E0114 00:58:45.439484 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.440077 kubelet[3293]: E0114 00:58:45.440037 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.440077 kubelet[3293]: W0114 00:58:45.440050 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.440077 kubelet[3293]: E0114 00:58:45.440063 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.449365 kubelet[3293]: E0114 00:58:45.449111 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:45.449365 kubelet[3293]: W0114 00:58:45.449131 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:45.449365 kubelet[3293]: E0114 00:58:45.449151 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:45.450512 systemd[1]: Started cri-containerd-24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934.scope - libcontainer container 24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934. 
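The repeated driver-call failures above come from the kubelet's periodic FlexVolume plugin probe: it walks the plugin directory (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec), executes each driver binary with the init subcommand, and unmarshals the JSON the driver prints on stdout. The nodeagent~uds directory exists but its uds executable does not yet, so the exec fails, stdout is empty, and the unmarshal reports "unexpected end of JSON input"; the prober then skips that plugin and retries later. The calico-node pod created above mounts flexvol-driver-host, which is presumably where this driver eventually gets installed. A minimal, illustrative-only sketch of such a driver (not the actual Calico uds binary) shows the contract the kubelet expects:

#!/usr/bin/env python3
# Illustrative sketch of a FlexVolume driver executable, not the real Calico
# "uds" driver. The kubelet runs the binary with a subcommand such as "init"
# and parses the JSON printed on stdout; an empty stdout is exactly what
# produces the "unexpected end of JSON input" errors in the log above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success; this sketch does not implement attach/detach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any other operation is declined.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Until a real driver binary appears under nodeagent~uds, these probe failures are repeated noise rather than a fatal condition: the kubelet logs the warning, skips the plugin, and pod setup continues, as the sandbox creation for calico-typha and calico-node in the surrounding entries shows.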
Jan 14 00:58:45.463000 audit: BPF prog-id=163 op=LOAD Jan 14 00:58:45.464000 audit: BPF prog-id=164 op=LOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.464000 audit: BPF prog-id=164 op=UNLOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.464000 audit: BPF prog-id=165 op=LOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.464000 audit: BPF prog-id=166 op=LOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.464000 audit: BPF prog-id=166 op=UNLOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.464000 audit: BPF prog-id=165 op=UNLOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.464000 audit: BPF prog-id=167 op=LOAD Jan 14 00:58:45.464000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3913 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:45.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613530646533303566393963316561343034316666393864623837 Jan 14 00:58:45.495205 containerd[1939]: time="2026-01-14T00:58:45.495132478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kccgl,Uid:6ca87c96-5126-4889-92b4-ab44a916225f,Namespace:calico-system,Attempt:0,} returns sandbox id \"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\"" Jan 14 00:58:46.529889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833775273.mount: Deactivated successfully. Jan 14 00:58:47.157074 containerd[1939]: time="2026-01-14T00:58:47.157024105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:47.159213 containerd[1939]: time="2026-01-14T00:58:47.158944437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 00:58:47.165116 containerd[1939]: time="2026-01-14T00:58:47.164913058Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:47.167098 containerd[1939]: time="2026-01-14T00:58:47.167049323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:47.168026 containerd[1939]: time="2026-01-14T00:58:47.167603947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.823273431s" Jan 14 00:58:47.168026 containerd[1939]: time="2026-01-14T00:58:47.167640582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 00:58:47.168889 containerd[1939]: time="2026-01-14T00:58:47.168851397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 00:58:47.190237 containerd[1939]: 
time="2026-01-14T00:58:47.190132163Z" level=info msg="CreateContainer within sandbox \"3588c9352adc6dcf0363bdb8f8cd96239cb5271733a66db273fe3e7a0a0a157a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 00:58:47.199209 containerd[1939]: time="2026-01-14T00:58:47.198588106Z" level=info msg="Container 795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:47.207578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861988424.mount: Deactivated successfully. Jan 14 00:58:47.210695 containerd[1939]: time="2026-01-14T00:58:47.210658809Z" level=info msg="CreateContainer within sandbox \"3588c9352adc6dcf0363bdb8f8cd96239cb5271733a66db273fe3e7a0a0a157a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15\"" Jan 14 00:58:47.211392 containerd[1939]: time="2026-01-14T00:58:47.211366828Z" level=info msg="StartContainer for \"795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15\"" Jan 14 00:58:47.212622 containerd[1939]: time="2026-01-14T00:58:47.212594413Z" level=info msg="connecting to shim 795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15" address="unix:///run/containerd/s/e92846c460716e5fcb7a380f9a5eb9b53a10f5736851489e3a359d0ac657c45c" protocol=ttrpc version=3 Jan 14 00:58:47.263393 systemd[1]: Started cri-containerd-795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15.scope - libcontainer container 795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15. Jan 14 00:58:47.276000 audit: BPF prog-id=168 op=LOAD Jan 14 00:58:47.276000 audit: BPF prog-id=169 op=LOAD Jan 14 00:58:47.276000 audit[3987]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.276000 audit: BPF prog-id=169 op=UNLOAD Jan 14 00:58:47.276000 audit[3987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.277000 audit: BPF prog-id=170 op=LOAD Jan 14 00:58:47.277000 audit[3987]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.277000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.277000 audit: BPF prog-id=171 op=LOAD Jan 14 00:58:47.277000 audit[3987]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.277000 audit: BPF prog-id=171 op=UNLOAD Jan 14 00:58:47.277000 audit[3987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.277000 audit: BPF prog-id=170 op=UNLOAD Jan 14 00:58:47.277000 audit[3987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.277000 audit: BPF prog-id=172 op=LOAD Jan 14 00:58:47.277000 audit[3987]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3812 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:47.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739356432333535386461393238366266373531633632653564613864 Jan 14 00:58:47.302211 kubelet[3293]: E0114 00:58:47.302118 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:47.344846 containerd[1939]: time="2026-01-14T00:58:47.344799960Z" level=info msg="StartContainer for \"795d23558da9286bf751c62e5da8d8910432e1651bc851a81543b229a06e3c15\" 
returns successfully" Jan 14 00:58:47.531991 kubelet[3293]: E0114 00:58:47.531880 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.533380 kubelet[3293]: W0114 00:58:47.533144 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.535290 kubelet[3293]: E0114 00:58:47.535225 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.536372 kubelet[3293]: E0114 00:58:47.536346 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.536745 kubelet[3293]: W0114 00:58:47.536578 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.536745 kubelet[3293]: E0114 00:58:47.536607 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.537923 kubelet[3293]: E0114 00:58:47.537856 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.538286 kubelet[3293]: W0114 00:58:47.538113 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.538286 kubelet[3293]: E0114 00:58:47.538135 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.540968 kubelet[3293]: E0114 00:58:47.540824 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.540968 kubelet[3293]: W0114 00:58:47.540928 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.541677 kubelet[3293]: E0114 00:58:47.540949 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.542057 kubelet[3293]: E0114 00:58:47.542039 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.542249 kubelet[3293]: W0114 00:58:47.542057 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.542249 kubelet[3293]: E0114 00:58:47.542071 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:47.543899 kubelet[3293]: E0114 00:58:47.543881 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.543899 kubelet[3293]: W0114 00:58:47.543899 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.544344 kubelet[3293]: E0114 00:58:47.543912 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.544454 kubelet[3293]: E0114 00:58:47.544433 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.545047 kubelet[3293]: W0114 00:58:47.545003 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.545047 kubelet[3293]: E0114 00:58:47.545031 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.545608 kubelet[3293]: E0114 00:58:47.545592 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.545677 kubelet[3293]: W0114 00:58:47.545608 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.545677 kubelet[3293]: E0114 00:58:47.545621 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.546961 kubelet[3293]: E0114 00:58:47.546940 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.546961 kubelet[3293]: W0114 00:58:47.546959 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.547085 kubelet[3293]: E0114 00:58:47.546972 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.547467 kubelet[3293]: E0114 00:58:47.547404 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.547467 kubelet[3293]: W0114 00:58:47.547418 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.547467 kubelet[3293]: E0114 00:58:47.547429 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:47.547997 kubelet[3293]: E0114 00:58:47.547980 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.547997 kubelet[3293]: W0114 00:58:47.547997 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.548332 kubelet[3293]: E0114 00:58:47.548010 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.549563 kubelet[3293]: E0114 00:58:47.549544 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.549660 kubelet[3293]: W0114 00:58:47.549576 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.549660 kubelet[3293]: E0114 00:58:47.549590 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.549867 kubelet[3293]: E0114 00:58:47.549819 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.549867 kubelet[3293]: W0114 00:58:47.549829 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.549867 kubelet[3293]: E0114 00:58:47.549841 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.550100 kubelet[3293]: E0114 00:58:47.550041 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.550100 kubelet[3293]: W0114 00:58:47.550053 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.550100 kubelet[3293]: E0114 00:58:47.550066 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.550672 kubelet[3293]: E0114 00:58:47.550298 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.550672 kubelet[3293]: W0114 00:58:47.550309 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.550672 kubelet[3293]: E0114 00:58:47.550320 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:47.552857 kubelet[3293]: E0114 00:58:47.552839 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.552857 kubelet[3293]: W0114 00:58:47.552857 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.552975 kubelet[3293]: E0114 00:58:47.552871 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.554524 kubelet[3293]: E0114 00:58:47.554476 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.554524 kubelet[3293]: W0114 00:58:47.554490 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.554524 kubelet[3293]: E0114 00:58:47.554502 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.554911 kubelet[3293]: E0114 00:58:47.554783 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.554911 kubelet[3293]: W0114 00:58:47.554793 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.554911 kubelet[3293]: E0114 00:58:47.554807 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.555828 kubelet[3293]: E0114 00:58:47.555792 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.555828 kubelet[3293]: W0114 00:58:47.555806 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.555828 kubelet[3293]: E0114 00:58:47.555819 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556051 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.557202 kubelet[3293]: W0114 00:58:47.556063 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556074 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556398 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.557202 kubelet[3293]: W0114 00:58:47.556408 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556421 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556672 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.557202 kubelet[3293]: W0114 00:58:47.556682 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556694 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.557202 kubelet[3293]: E0114 00:58:47.556937 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.557784 kubelet[3293]: W0114 00:58:47.556947 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.557784 kubelet[3293]: E0114 00:58:47.556958 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.557784 kubelet[3293]: E0114 00:58:47.557234 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.557784 kubelet[3293]: W0114 00:58:47.557244 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.557784 kubelet[3293]: E0114 00:58:47.557257 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.558288 kubelet[3293]: E0114 00:58:47.558261 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.558288 kubelet[3293]: W0114 00:58:47.558281 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.558538 kubelet[3293]: E0114 00:58:47.558295 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:47.558694 kubelet[3293]: E0114 00:58:47.558554 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.558694 kubelet[3293]: W0114 00:58:47.558565 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.558694 kubelet[3293]: E0114 00:58:47.558614 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.559104 kubelet[3293]: E0114 00:58:47.558986 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.559104 kubelet[3293]: W0114 00:58:47.558997 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.559104 kubelet[3293]: E0114 00:58:47.559008 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.559563 kubelet[3293]: E0114 00:58:47.559296 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.559563 kubelet[3293]: W0114 00:58:47.559306 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.559563 kubelet[3293]: E0114 00:58:47.559318 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.560612 kubelet[3293]: E0114 00:58:47.560587 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.560612 kubelet[3293]: W0114 00:58:47.560604 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.560860 kubelet[3293]: E0114 00:58:47.560617 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.560860 kubelet[3293]: E0114 00:58:47.560838 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.560860 kubelet[3293]: W0114 00:58:47.560849 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.560990 kubelet[3293]: E0114 00:58:47.560860 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:47.561354 kubelet[3293]: E0114 00:58:47.561106 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.561354 kubelet[3293]: W0114 00:58:47.561118 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.561354 kubelet[3293]: E0114 00:58:47.561130 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.561526 kubelet[3293]: E0114 00:58:47.561377 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.561526 kubelet[3293]: W0114 00:58:47.561387 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.561526 kubelet[3293]: E0114 00:58:47.561400 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 00:58:47.562147 kubelet[3293]: E0114 00:58:47.562071 3293 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 00:58:47.562147 kubelet[3293]: W0114 00:58:47.562085 3293 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 00:58:47.562147 kubelet[3293]: E0114 00:58:47.562098 3293 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 00:58:48.353536 containerd[1939]: time="2026-01-14T00:58:48.353485406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:48.355519 containerd[1939]: time="2026-01-14T00:58:48.355448114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 00:58:48.357672 containerd[1939]: time="2026-01-14T00:58:48.357608538Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:48.360951 containerd[1939]: time="2026-01-14T00:58:48.360897096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:48.361760 containerd[1939]: time="2026-01-14T00:58:48.361557157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.192668082s" Jan 14 00:58:48.361760 containerd[1939]: time="2026-01-14T00:58:48.361596488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 00:58:48.367745 containerd[1939]: time="2026-01-14T00:58:48.367705076Z" level=info msg="CreateContainer within sandbox \"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 00:58:48.417621 containerd[1939]: time="2026-01-14T00:58:48.417581916Z" level=info msg="Container a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:48.434828 containerd[1939]: time="2026-01-14T00:58:48.434787128Z" level=info msg="CreateContainer within sandbox \"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06\"" Jan 14 00:58:48.435519 containerd[1939]: time="2026-01-14T00:58:48.435490761Z" level=info msg="StartContainer for \"a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06\"" Jan 14 00:58:48.436961 containerd[1939]: time="2026-01-14T00:58:48.436935341Z" level=info msg="connecting to shim a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06" address="unix:///run/containerd/s/9db06182455f04e444547c68e3c2f603280889c9ef54fc85b02cc8805351b18f" protocol=ttrpc version=3 Jan 14 00:58:48.461406 systemd[1]: Started cri-containerd-a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06.scope - libcontainer container a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06. 
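The audit records that accompany each container start (BPF prog-id ... op=LOAD/UNLOAD plus SYSCALL records with comm="runc") come from runc setting up the container: on x86_64, syscall 321 is bpf(2) and syscall 3 is close(2), so each LOAD is a bpf() call returning a program fd (the exit= value) and each UNLOAD corresponds to that fd being closed again (the a0= of the following close, read as hex, matches the earlier fd); on cgroup v2 the device access policy itself is carried by a BPF program. The proctitle field is the audited process's command line, hex-encoded because the argv elements are separated by NUL bytes, and length-capped, which is why the container id at the end looks cut off. A short decoder for that encoding, shown purely as an illustration and not a tool present on this host:

package main

// Decode an audit PROCTITLE value: the process argv, hex-encoded with NUL
// bytes between arguments (roughly what ausearch -i displays).
import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid hex:", err)
		os.Exit(1)
	}
	// Arguments are NUL-separated in the kernel's proctitle buffer.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " "))
}

Applied to the proctitle values above it yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<truncated container id>, i.e. the containerd shim invoking runc for the container that was just started.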
Jan 14 00:58:48.502000 audit: BPF prog-id=173 op=LOAD Jan 14 00:58:48.502000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3913 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133346535653632623139393365636466613334613061613830303833 Jan 14 00:58:48.502000 audit: BPF prog-id=174 op=LOAD Jan 14 00:58:48.502000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3913 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133346535653632623139393365636466613334613061613830303833 Jan 14 00:58:48.502000 audit: BPF prog-id=174 op=UNLOAD Jan 14 00:58:48.502000 audit[4061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133346535653632623139393365636466613334613061613830303833 Jan 14 00:58:48.502000 audit: BPF prog-id=173 op=UNLOAD Jan 14 00:58:48.502000 audit[4061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133346535653632623139393365636466613334613061613830303833 Jan 14 00:58:48.502000 audit: BPF prog-id=175 op=LOAD Jan 14 00:58:48.502000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3913 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133346535653632623139393365636466613334613061613830303833 Jan 14 00:58:48.509805 kubelet[3293]: I0114 00:58:48.509658 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-569f8f6c8f-6v45x" podStartSLOduration=2.683268184 podStartE2EDuration="4.507939102s" podCreationTimestamp="2026-01-14 00:58:44 +0000 UTC" firstStartedPulling="2026-01-14 00:58:45.343918222 +0000 UTC m=+52.290273055" lastFinishedPulling="2026-01-14 00:58:47.168589142 +0000 UTC m=+54.114943973" observedRunningTime="2026-01-14 00:58:47.54290272 +0000 UTC m=+54.489257573" watchObservedRunningTime="2026-01-14 00:58:48.507939102 +0000 UTC m=+55.454293954" Jan 14 00:58:48.534000 audit[4091]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:48.534000 audit[4091]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7e644a70 a2=0 a3=7fff7e644a5c items=0 ppid=3541 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:48.540000 audit[4091]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:58:48.540000 audit[4091]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff7e644a70 a2=0 a3=7fff7e644a5c items=0 ppid=3541 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:48.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:58:48.554763 systemd[1]: cri-containerd-a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06.scope: Deactivated successfully. Jan 14 00:58:48.558000 audit: BPF prog-id=175 op=UNLOAD Jan 14 00:58:48.569222 containerd[1939]: time="2026-01-14T00:58:48.569007864Z" level=info msg="StartContainer for \"a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06\" returns successfully" Jan 14 00:58:48.586636 containerd[1939]: time="2026-01-14T00:58:48.586582114Z" level=info msg="received container exit event container_id:\"a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06\" id:\"a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06\" pid:4074 exited_at:{seconds:1768352328 nanos:567121703}" Jan 14 00:58:48.610518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a34e5e62b1993ecdfa34a0aa8008394ba9cd5bb1c6fb71236e1dfde7f48f3d06-rootfs.mount: Deactivated successfully. 
Jan 14 00:58:49.302587 kubelet[3293]: E0114 00:58:49.302550 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:49.497997 containerd[1939]: time="2026-01-14T00:58:49.497785259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 00:58:51.304558 kubelet[3293]: E0114 00:58:51.304514 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:53.301943 kubelet[3293]: E0114 00:58:53.301875 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:54.039583 containerd[1939]: time="2026-01-14T00:58:54.039533149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:54.040504 containerd[1939]: time="2026-01-14T00:58:54.040470171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 00:58:54.041610 containerd[1939]: time="2026-01-14T00:58:54.041568392Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:54.043648 containerd[1939]: time="2026-01-14T00:58:54.043523208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:58:54.053087 containerd[1939]: time="2026-01-14T00:58:54.044163490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.546174104s" Jan 14 00:58:54.053087 containerd[1939]: time="2026-01-14T00:58:54.053086185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 00:58:54.066856 containerd[1939]: time="2026-01-14T00:58:54.066816572Z" level=info msg="CreateContainer within sandbox \"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 00:58:54.078343 containerd[1939]: time="2026-01-14T00:58:54.077329714Z" level=info msg="Container a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:58:54.086963 containerd[1939]: time="2026-01-14T00:58:54.086908042Z" level=info msg="CreateContainer within sandbox 
\"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8\"" Jan 14 00:58:54.087520 containerd[1939]: time="2026-01-14T00:58:54.087426789Z" level=info msg="StartContainer for \"a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8\"" Jan 14 00:58:54.088717 containerd[1939]: time="2026-01-14T00:58:54.088688502Z" level=info msg="connecting to shim a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8" address="unix:///run/containerd/s/9db06182455f04e444547c68e3c2f603280889c9ef54fc85b02cc8805351b18f" protocol=ttrpc version=3 Jan 14 00:58:54.112395 systemd[1]: Started cri-containerd-a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8.scope - libcontainer container a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8. Jan 14 00:58:54.181000 audit: BPF prog-id=176 op=LOAD Jan 14 00:58:54.183894 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 14 00:58:54.183967 kernel: audit: type=1334 audit(1768352334.181:578): prog-id=176 op=LOAD Jan 14 00:58:54.186851 kernel: audit: type=1300 audit(1768352334.181:578): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.181000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.191767 kernel: audit: type=1327 audit(1768352334.181:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.181000 audit: BPF prog-id=177 op=LOAD Jan 14 00:58:54.203902 kernel: audit: type=1334 audit(1768352334.181:579): prog-id=177 op=LOAD Jan 14 00:58:54.203980 kernel: audit: type=1300 audit(1768352334.181:579): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.181000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.209121 kernel: audit: type=1327 audit(1768352334.181:579): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.181000 audit: BPF prog-id=177 op=UNLOAD Jan 14 00:58:54.215451 kernel: audit: type=1334 audit(1768352334.181:580): prog-id=177 op=UNLOAD Jan 14 00:58:54.220552 kernel: audit: type=1300 audit(1768352334.181:580): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.220598 kernel: audit: type=1327 audit(1768352334.181:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.181000 audit[4126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.222124 kernel: audit: type=1334 audit(1768352334.181:581): prog-id=176 op=UNLOAD Jan 14 00:58:54.181000 audit: BPF prog-id=176 op=UNLOAD Jan 14 00:58:54.181000 audit[4126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.181000 audit: BPF prog-id=178 op=LOAD Jan 14 00:58:54.181000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3913 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:58:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138636334336336393063373639363533396238616639303632356633 Jan 14 00:58:54.226781 containerd[1939]: 
time="2026-01-14T00:58:54.226589145Z" level=info msg="StartContainer for \"a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8\" returns successfully" Jan 14 00:58:55.302420 kubelet[3293]: E0114 00:58:55.302260 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:55.320050 systemd[1]: cri-containerd-a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8.scope: Deactivated successfully. Jan 14 00:58:55.320941 systemd[1]: cri-containerd-a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8.scope: Consumed 477ms CPU time, 160.9M memory peak, 5.9M read from disk, 171.3M written to disk. Jan 14 00:58:55.323000 audit: BPF prog-id=178 op=UNLOAD Jan 14 00:58:55.328466 containerd[1939]: time="2026-01-14T00:58:55.328412814Z" level=info msg="received container exit event container_id:\"a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8\" id:\"a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8\" pid:4138 exited_at:{seconds:1768352335 nanos:322069660}" Jan 14 00:58:55.376389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8cc43c690c7696539b8af90625f344daa7c35d77183f1e2e7f3816ffa447fb8-rootfs.mount: Deactivated successfully. Jan 14 00:58:55.397342 kubelet[3293]: I0114 00:58:55.397282 3293 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 00:58:55.534638 containerd[1939]: time="2026-01-14T00:58:55.534364883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 00:58:55.569276 systemd[1]: Created slice kubepods-burstable-pod6399b59d_47f1_4ce4_83ea_ea3fb09c0249.slice - libcontainer container kubepods-burstable-pod6399b59d_47f1_4ce4_83ea_ea3fb09c0249.slice. Jan 14 00:58:55.580393 systemd[1]: Created slice kubepods-burstable-pod68eb86c9_3a24_4178_8dfd_2032dfe5776a.slice - libcontainer container kubepods-burstable-pod68eb86c9_3a24_4178_8dfd_2032dfe5776a.slice. Jan 14 00:58:55.595319 systemd[1]: Created slice kubepods-besteffort-podf05abc55_0515_49ca_aacb_ebde63d756a4.slice - libcontainer container kubepods-besteffort-podf05abc55_0515_49ca_aacb_ebde63d756a4.slice. 
Jan 14 00:58:55.608074 kubelet[3293]: I0114 00:58:55.608035 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jp9\" (UniqueName: \"kubernetes.io/projected/68eb86c9-3a24-4178-8dfd-2032dfe5776a-kube-api-access-s6jp9\") pod \"coredns-674b8bbfcf-mm224\" (UID: \"68eb86c9-3a24-4178-8dfd-2032dfe5776a\") " pod="kube-system/coredns-674b8bbfcf-mm224" Jan 14 00:58:55.608074 kubelet[3293]: I0114 00:58:55.608075 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhgl\" (UniqueName: \"kubernetes.io/projected/7449b447-f9d5-45e2-8001-6763bc56b2d8-kube-api-access-glhgl\") pod \"calico-apiserver-cdb9b59bb-pr6gg\" (UID: \"7449b447-f9d5-45e2-8001-6763bc56b2d8\") " pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" Jan 14 00:58:55.608396 kubelet[3293]: I0114 00:58:55.608113 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrl9h\" (UniqueName: \"kubernetes.io/projected/6399b59d-47f1-4ce4-83ea-ea3fb09c0249-kube-api-access-vrl9h\") pod \"coredns-674b8bbfcf-w9259\" (UID: \"6399b59d-47f1-4ce4-83ea-ea3fb09c0249\") " pod="kube-system/coredns-674b8bbfcf-w9259" Jan 14 00:58:55.608396 kubelet[3293]: I0114 00:58:55.608150 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05abc55-0515-49ca-aacb-ebde63d756a4-tigera-ca-bundle\") pod \"calico-kube-controllers-6b4df85555-29mjx\" (UID: \"f05abc55-0515-49ca-aacb-ebde63d756a4\") " pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" Jan 14 00:58:55.610575 systemd[1]: Created slice kubepods-besteffort-pod7449b447_f9d5_45e2_8001_6763bc56b2d8.slice - libcontainer container kubepods-besteffort-pod7449b447_f9d5_45e2_8001_6763bc56b2d8.slice. 
Jan 14 00:58:55.610911 kubelet[3293]: I0114 00:58:55.608175 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7449b447-f9d5-45e2-8001-6763bc56b2d8-calico-apiserver-certs\") pod \"calico-apiserver-cdb9b59bb-pr6gg\" (UID: \"7449b447-f9d5-45e2-8001-6763bc56b2d8\") " pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" Jan 14 00:58:55.611458 kubelet[3293]: I0114 00:58:55.610957 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8hb\" (UniqueName: \"kubernetes.io/projected/f05abc55-0515-49ca-aacb-ebde63d756a4-kube-api-access-qp8hb\") pod \"calico-kube-controllers-6b4df85555-29mjx\" (UID: \"f05abc55-0515-49ca-aacb-ebde63d756a4\") " pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" Jan 14 00:58:55.611458 kubelet[3293]: I0114 00:58:55.611002 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68eb86c9-3a24-4178-8dfd-2032dfe5776a-config-volume\") pod \"coredns-674b8bbfcf-mm224\" (UID: \"68eb86c9-3a24-4178-8dfd-2032dfe5776a\") " pod="kube-system/coredns-674b8bbfcf-mm224" Jan 14 00:58:55.611458 kubelet[3293]: I0114 00:58:55.611034 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6399b59d-47f1-4ce4-83ea-ea3fb09c0249-config-volume\") pod \"coredns-674b8bbfcf-w9259\" (UID: \"6399b59d-47f1-4ce4-83ea-ea3fb09c0249\") " pod="kube-system/coredns-674b8bbfcf-w9259" Jan 14 00:58:55.628074 systemd[1]: Created slice kubepods-besteffort-pod13972001_c667_49c0_9374_a2bbe47d8026.slice - libcontainer container kubepods-besteffort-pod13972001_c667_49c0_9374_a2bbe47d8026.slice. Jan 14 00:58:55.642130 systemd[1]: Created slice kubepods-besteffort-pod5e14ba26_eb09_4a70_a4a6_9ad8bd987906.slice - libcontainer container kubepods-besteffort-pod5e14ba26_eb09_4a70_a4a6_9ad8bd987906.slice. Jan 14 00:58:55.648552 systemd[1]: Created slice kubepods-besteffort-pod25ffadd4_59b8_4f2d_9557_8aa31d4bee36.slice - libcontainer container kubepods-besteffort-pod25ffadd4_59b8_4f2d_9557_8aa31d4bee36.slice. 
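The kubepods-*.slice units created above follow a mechanical naming scheme: the pod's QoS class ("burstable" or "besteffort" here) plus its UID with dashes replaced by underscores, since dashes carry hierarchy meaning in systemd slice names. A small illustrative sketch of that mapping (assumed simplification, not kubelet's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice mirrors the slice names visible in the log:
// kubepods-<qos>-pod<uid>.slice, with dashes in the UID escaped to underscores.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "6399b59d-47f1-4ce4-83ea-ea3fb09c0249"))
	fmt.Println(podSlice("besteffort", "f05abc55-0515-49ca-aacb-ebde63d756a4"))
	// kubepods-burstable-pod6399b59d_47f1_4ce4_83ea_ea3fb09c0249.slice
	// kubepods-besteffort-podf05abc55_0515_49ca_aacb_ebde63d756a4.slice
}
```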
Jan 14 00:58:55.712056 kubelet[3293]: I0114 00:58:55.711950 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hskc\" (UniqueName: \"kubernetes.io/projected/5e14ba26-eb09-4a70-a4a6-9ad8bd987906-kube-api-access-6hskc\") pod \"calico-apiserver-cdb9b59bb-cpdlw\" (UID: \"5e14ba26-eb09-4a70-a4a6-9ad8bd987906\") " pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" Jan 14 00:58:55.712974 kubelet[3293]: I0114 00:58:55.712366 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-backend-key-pair\") pod \"whisker-69899b58d4-6wzxj\" (UID: \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\") " pod="calico-system/whisker-69899b58d4-6wzxj" Jan 14 00:58:55.712974 kubelet[3293]: I0114 00:58:55.712405 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdzd\" (UniqueName: \"kubernetes.io/projected/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-kube-api-access-jwdzd\") pod \"whisker-69899b58d4-6wzxj\" (UID: \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\") " pod="calico-system/whisker-69899b58d4-6wzxj" Jan 14 00:58:55.712974 kubelet[3293]: I0114 00:58:55.712451 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5e14ba26-eb09-4a70-a4a6-9ad8bd987906-calico-apiserver-certs\") pod \"calico-apiserver-cdb9b59bb-cpdlw\" (UID: \"5e14ba26-eb09-4a70-a4a6-9ad8bd987906\") " pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" Jan 14 00:58:55.712974 kubelet[3293]: I0114 00:58:55.712520 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13972001-c667-49c0-9374-a2bbe47d8026-goldmane-ca-bundle\") pod \"goldmane-666569f655-xlvwp\" (UID: \"13972001-c667-49c0-9374-a2bbe47d8026\") " pod="calico-system/goldmane-666569f655-xlvwp" Jan 14 00:58:55.712974 kubelet[3293]: I0114 00:58:55.712568 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/13972001-c667-49c0-9374-a2bbe47d8026-goldmane-key-pair\") pod \"goldmane-666569f655-xlvwp\" (UID: \"13972001-c667-49c0-9374-a2bbe47d8026\") " pod="calico-system/goldmane-666569f655-xlvwp" Jan 14 00:58:55.713301 kubelet[3293]: I0114 00:58:55.712661 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-ca-bundle\") pod \"whisker-69899b58d4-6wzxj\" (UID: \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\") " pod="calico-system/whisker-69899b58d4-6wzxj" Jan 14 00:58:55.713301 kubelet[3293]: I0114 00:58:55.712690 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13972001-c667-49c0-9374-a2bbe47d8026-config\") pod \"goldmane-666569f655-xlvwp\" (UID: \"13972001-c667-49c0-9374-a2bbe47d8026\") " pod="calico-system/goldmane-666569f655-xlvwp" Jan 14 00:58:55.713301 kubelet[3293]: I0114 00:58:55.712714 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qw6k\" (UniqueName: 
\"kubernetes.io/projected/13972001-c667-49c0-9374-a2bbe47d8026-kube-api-access-2qw6k\") pod \"goldmane-666569f655-xlvwp\" (UID: \"13972001-c667-49c0-9374-a2bbe47d8026\") " pod="calico-system/goldmane-666569f655-xlvwp" Jan 14 00:58:55.877627 containerd[1939]: time="2026-01-14T00:58:55.877507894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9259,Uid:6399b59d-47f1-4ce4-83ea-ea3fb09c0249,Namespace:kube-system,Attempt:0,}" Jan 14 00:58:55.894714 containerd[1939]: time="2026-01-14T00:58:55.894681998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mm224,Uid:68eb86c9-3a24-4178-8dfd-2032dfe5776a,Namespace:kube-system,Attempt:0,}" Jan 14 00:58:55.907624 containerd[1939]: time="2026-01-14T00:58:55.907474696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4df85555-29mjx,Uid:f05abc55-0515-49ca-aacb-ebde63d756a4,Namespace:calico-system,Attempt:0,}" Jan 14 00:58:55.925069 containerd[1939]: time="2026-01-14T00:58:55.925024983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-pr6gg,Uid:7449b447-f9d5-45e2-8001-6763bc56b2d8,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:58:55.936303 containerd[1939]: time="2026-01-14T00:58:55.936237642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xlvwp,Uid:13972001-c667-49c0-9374-a2bbe47d8026,Namespace:calico-system,Attempt:0,}" Jan 14 00:58:55.950947 containerd[1939]: time="2026-01-14T00:58:55.950891127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-cpdlw,Uid:5e14ba26-eb09-4a70-a4a6-9ad8bd987906,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:58:55.952673 containerd[1939]: time="2026-01-14T00:58:55.952633919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69899b58d4-6wzxj,Uid:25ffadd4-59b8-4f2d-9557-8aa31d4bee36,Namespace:calico-system,Attempt:0,}" Jan 14 00:58:57.368987 systemd[1]: Created slice kubepods-besteffort-pode723b976_5fd1_4159_bd8a_f3fe80761ec5.slice - libcontainer container kubepods-besteffort-pode723b976_5fd1_4159_bd8a_f3fe80761ec5.slice. Jan 14 00:58:57.423484 containerd[1939]: time="2026-01-14T00:58:57.423438770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tkkg,Uid:e723b976-5fd1-4159-bd8a-f3fe80761ec5,Namespace:calico-system,Attempt:0,}" Jan 14 00:58:58.642062 containerd[1939]: time="2026-01-14T00:58:58.642024640Z" level=error msg="Failed to destroy network for sandbox \"d8fa3e37ec365137dfc63c333299b0ccd40b80dd0a7b2a6984aa7d4725a80974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.648422 systemd[1]: run-netns-cni\x2dcf85276f\x2dca1d\x2ddedd\x2d626c\x2d392ce219c51c.mount: Deactivated successfully. 
Jan 14 00:58:58.656387 containerd[1939]: time="2026-01-14T00:58:58.656345709Z" level=error msg="Failed to destroy network for sandbox \"d66631cf2b8894b54d8f9b42e881bd4bd051863ba9c79cdf8cea3a1254af642e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.659490 containerd[1939]: time="2026-01-14T00:58:58.659439227Z" level=error msg="Failed to destroy network for sandbox \"7171485478e6a8c1b2083825e4d0e54d5e8896d9e58ed13b5f54d1bb2d346d51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.661839 systemd[1]: run-netns-cni\x2d4926f54b\x2d32d5\x2dd666\x2d45c6\x2d841ee0b2e1ab.mount: Deactivated successfully. Jan 14 00:58:58.669218 containerd[1939]: time="2026-01-14T00:58:58.668959690Z" level=error msg="Failed to destroy network for sandbox \"5e1c17a46bab0ab924cf4cf27f3accacafdfd4bb4d40acf05e171659e8dbc45a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.670474 systemd[1]: run-netns-cni\x2dd642dfab\x2d9f63\x2d8aa5\x2d20a9\x2d2cf54b93fa0e.mount: Deactivated successfully. Jan 14 00:58:58.678368 systemd[1]: run-netns-cni\x2d51f6e17a\x2db419\x2dd692\x2dda6b\x2dd5c08c924089.mount: Deactivated successfully. Jan 14 00:58:58.685651 containerd[1939]: time="2026-01-14T00:58:58.685595760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mm224,Uid:68eb86c9-3a24-4178-8dfd-2032dfe5776a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8fa3e37ec365137dfc63c333299b0ccd40b80dd0a7b2a6984aa7d4725a80974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.690507 containerd[1939]: time="2026-01-14T00:58:58.690467969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xlvwp,Uid:13972001-c667-49c0-9374-a2bbe47d8026,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d66631cf2b8894b54d8f9b42e881bd4bd051863ba9c79cdf8cea3a1254af642e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.694003 containerd[1939]: time="2026-01-14T00:58:58.692257555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9259,Uid:6399b59d-47f1-4ce4-83ea-ea3fb09c0249,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7171485478e6a8c1b2083825e4d0e54d5e8896d9e58ed13b5f54d1bb2d346d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.694003 containerd[1939]: time="2026-01-14T00:58:58.692363861Z" level=error msg="Failed to destroy network for sandbox \"847c2873acacc811a9a3ba944b7627eb80708ef6153a1cd15886a6170694a6d0\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.696974 systemd[1]: run-netns-cni\x2db899516b\x2d813e\x2d3c63\x2dd4e1\x2d88911542c9a6.mount: Deactivated successfully. Jan 14 00:58:58.700670 containerd[1939]: time="2026-01-14T00:58:58.700634948Z" level=error msg="Failed to destroy network for sandbox \"1bbef34561efccc90966cddf90a57c945e903852d135e77cfe199778ff2409b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.701262 containerd[1939]: time="2026-01-14T00:58:58.701233796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-pr6gg,Uid:7449b447-f9d5-45e2-8001-6763bc56b2d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1c17a46bab0ab924cf4cf27f3accacafdfd4bb4d40acf05e171659e8dbc45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.709129 containerd[1939]: time="2026-01-14T00:58:58.709045891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tkkg,Uid:e723b976-5fd1-4159-bd8a-f3fe80761ec5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"847c2873acacc811a9a3ba944b7627eb80708ef6153a1cd15886a6170694a6d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.710939 kubelet[3293]: E0114 00:58:58.710565 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"847c2873acacc811a9a3ba944b7627eb80708ef6153a1cd15886a6170694a6d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.710939 kubelet[3293]: E0114 00:58:58.710569 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8fa3e37ec365137dfc63c333299b0ccd40b80dd0a7b2a6984aa7d4725a80974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.710939 kubelet[3293]: E0114 00:58:58.710624 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d66631cf2b8894b54d8f9b42e881bd4bd051863ba9c79cdf8cea3a1254af642e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.710939 kubelet[3293]: E0114 00:58:58.710643 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"847c2873acacc811a9a3ba944b7627eb80708ef6153a1cd15886a6170694a6d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:58.711389 kubelet[3293]: E0114 00:58:58.710664 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"847c2873acacc811a9a3ba944b7627eb80708ef6153a1cd15886a6170694a6d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tkkg" Jan 14 00:58:58.711389 kubelet[3293]: E0114 00:58:58.710664 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d66631cf2b8894b54d8f9b42e881bd4bd051863ba9c79cdf8cea3a1254af642e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xlvwp" Jan 14 00:58:58.711389 kubelet[3293]: E0114 00:58:58.710694 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d66631cf2b8894b54d8f9b42e881bd4bd051863ba9c79cdf8cea3a1254af642e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xlvwp" Jan 14 00:58:58.712100 kubelet[3293]: E0114 00:58:58.710716 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"847c2873acacc811a9a3ba944b7627eb80708ef6153a1cd15886a6170694a6d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:58:58.712100 kubelet[3293]: E0114 00:58:58.710754 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xlvwp_calico-system(13972001-c667-49c0-9374-a2bbe47d8026)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xlvwp_calico-system(13972001-c667-49c0-9374-a2bbe47d8026)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d66631cf2b8894b54d8f9b42e881bd4bd051863ba9c79cdf8cea3a1254af642e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 00:58:58.712100 kubelet[3293]: E0114 00:58:58.710800 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1c17a46bab0ab924cf4cf27f3accacafdfd4bb4d40acf05e171659e8dbc45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.712270 kubelet[3293]: E0114 00:58:58.710831 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1c17a46bab0ab924cf4cf27f3accacafdfd4bb4d40acf05e171659e8dbc45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" Jan 14 00:58:58.712270 kubelet[3293]: E0114 00:58:58.710844 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1c17a46bab0ab924cf4cf27f3accacafdfd4bb4d40acf05e171659e8dbc45a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" Jan 14 00:58:58.712270 kubelet[3293]: E0114 00:58:58.710870 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e1c17a46bab0ab924cf4cf27f3accacafdfd4bb4d40acf05e171659e8dbc45a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 00:58:58.712368 kubelet[3293]: E0114 00:58:58.710914 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7171485478e6a8c1b2083825e4d0e54d5e8896d9e58ed13b5f54d1bb2d346d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.712368 kubelet[3293]: E0114 00:58:58.710929 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7171485478e6a8c1b2083825e4d0e54d5e8896d9e58ed13b5f54d1bb2d346d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w9259" Jan 14 00:58:58.712368 kubelet[3293]: E0114 00:58:58.710940 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7171485478e6a8c1b2083825e4d0e54d5e8896d9e58ed13b5f54d1bb2d346d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w9259" Jan 14 00:58:58.712450 kubelet[3293]: E0114 00:58:58.710964 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-w9259_kube-system(6399b59d-47f1-4ce4-83ea-ea3fb09c0249)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w9259_kube-system(6399b59d-47f1-4ce4-83ea-ea3fb09c0249)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7171485478e6a8c1b2083825e4d0e54d5e8896d9e58ed13b5f54d1bb2d346d51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w9259" podUID="6399b59d-47f1-4ce4-83ea-ea3fb09c0249" Jan 14 00:58:58.712450 kubelet[3293]: E0114 00:58:58.711000 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8fa3e37ec365137dfc63c333299b0ccd40b80dd0a7b2a6984aa7d4725a80974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mm224" Jan 14 00:58:58.712450 kubelet[3293]: E0114 00:58:58.711011 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8fa3e37ec365137dfc63c333299b0ccd40b80dd0a7b2a6984aa7d4725a80974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mm224" Jan 14 00:58:58.712540 kubelet[3293]: E0114 00:58:58.711036 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mm224_kube-system(68eb86c9-3a24-4178-8dfd-2032dfe5776a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mm224_kube-system(68eb86c9-3a24-4178-8dfd-2032dfe5776a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8fa3e37ec365137dfc63c333299b0ccd40b80dd0a7b2a6984aa7d4725a80974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mm224" podUID="68eb86c9-3a24-4178-8dfd-2032dfe5776a" Jan 14 00:58:58.713960 containerd[1939]: time="2026-01-14T00:58:58.713933706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69899b58d4-6wzxj,Uid:25ffadd4-59b8-4f2d-9557-8aa31d4bee36,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbef34561efccc90966cddf90a57c945e903852d135e77cfe199778ff2409b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.714452 kubelet[3293]: E0114 00:58:58.714298 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbef34561efccc90966cddf90a57c945e903852d135e77cfe199778ff2409b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.714452 kubelet[3293]: E0114 00:58:58.714333 3293 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbef34561efccc90966cddf90a57c945e903852d135e77cfe199778ff2409b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69899b58d4-6wzxj" Jan 14 00:58:58.714452 kubelet[3293]: E0114 00:58:58.714349 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbef34561efccc90966cddf90a57c945e903852d135e77cfe199778ff2409b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69899b58d4-6wzxj" Jan 14 00:58:58.714570 kubelet[3293]: E0114 00:58:58.714389 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69899b58d4-6wzxj_calico-system(25ffadd4-59b8-4f2d-9557-8aa31d4bee36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69899b58d4-6wzxj_calico-system(25ffadd4-59b8-4f2d-9557-8aa31d4bee36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bbef34561efccc90966cddf90a57c945e903852d135e77cfe199778ff2409b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69899b58d4-6wzxj" podUID="25ffadd4-59b8-4f2d-9557-8aa31d4bee36" Jan 14 00:58:58.737749 containerd[1939]: time="2026-01-14T00:58:58.737641123Z" level=error msg="Failed to destroy network for sandbox \"f1ff7da8d14d7aeee29cc8b6092b119197b1bab0540b84579c7e2c01fdc43198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.738022 containerd[1939]: time="2026-01-14T00:58:58.737990412Z" level=error msg="Failed to destroy network for sandbox \"129d9f35e20fab30ba5b9bb72391143f9f6daf8d0237d45149aacf235248a2a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.740342 containerd[1939]: time="2026-01-14T00:58:58.740309558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-cpdlw,Uid:5e14ba26-eb09-4a70-a4a6-9ad8bd987906,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"129d9f35e20fab30ba5b9bb72391143f9f6daf8d0237d45149aacf235248a2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.741620 kubelet[3293]: E0114 00:58:58.741477 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129d9f35e20fab30ba5b9bb72391143f9f6daf8d0237d45149aacf235248a2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.741620 kubelet[3293]: E0114 00:58:58.741529 
3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129d9f35e20fab30ba5b9bb72391143f9f6daf8d0237d45149aacf235248a2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" Jan 14 00:58:58.741620 kubelet[3293]: E0114 00:58:58.741550 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129d9f35e20fab30ba5b9bb72391143f9f6daf8d0237d45149aacf235248a2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" Jan 14 00:58:58.741759 kubelet[3293]: E0114 00:58:58.741597 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cdb9b59bb-cpdlw_calico-apiserver(5e14ba26-eb09-4a70-a4a6-9ad8bd987906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cdb9b59bb-cpdlw_calico-apiserver(5e14ba26-eb09-4a70-a4a6-9ad8bd987906)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"129d9f35e20fab30ba5b9bb72391143f9f6daf8d0237d45149aacf235248a2a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:58:58.743082 containerd[1939]: time="2026-01-14T00:58:58.742990188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4df85555-29mjx,Uid:f05abc55-0515-49ca-aacb-ebde63d756a4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ff7da8d14d7aeee29cc8b6092b119197b1bab0540b84579c7e2c01fdc43198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.743552 kubelet[3293]: E0114 00:58:58.743236 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ff7da8d14d7aeee29cc8b6092b119197b1bab0540b84579c7e2c01fdc43198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:58:58.743644 kubelet[3293]: E0114 00:58:58.743571 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ff7da8d14d7aeee29cc8b6092b119197b1bab0540b84579c7e2c01fdc43198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" Jan 14 00:58:58.743644 kubelet[3293]: E0114 00:58:58.743597 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f1ff7da8d14d7aeee29cc8b6092b119197b1bab0540b84579c7e2c01fdc43198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" Jan 14 00:58:58.743727 kubelet[3293]: E0114 00:58:58.743643 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1ff7da8d14d7aeee29cc8b6092b119197b1bab0540b84579c7e2c01fdc43198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:58:59.646106 systemd[1]: run-netns-cni\x2dd39bf304\x2d6859\x2dd662\x2d42b2\x2d2cd5ac089e1e.mount: Deactivated successfully. Jan 14 00:58:59.646235 systemd[1]: run-netns-cni\x2db9a9afa5\x2df8cf\x2d36c4\x2d3046\x2d782b7b4dea94.mount: Deactivated successfully. Jan 14 00:58:59.646318 systemd[1]: run-netns-cni\x2d48cda38e\x2d2c38\x2d912e\x2d686d\x2d2c7dd96b016f.mount: Deactivated successfully. Jan 14 00:59:05.841740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764953919.mount: Deactivated successfully. Jan 14 00:59:05.871091 containerd[1939]: time="2026-01-14T00:59:05.870699612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:59:05.875642 containerd[1939]: time="2026-01-14T00:59:05.875599780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 00:59:05.876162 containerd[1939]: time="2026-01-14T00:59:05.876120482Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:59:05.878676 containerd[1939]: time="2026-01-14T00:59:05.878627345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 00:59:05.879456 containerd[1939]: time="2026-01-14T00:59:05.879113300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.344699651s" Jan 14 00:59:05.879456 containerd[1939]: time="2026-01-14T00:59:05.879139180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 00:59:05.938681 containerd[1939]: time="2026-01-14T00:59:05.938638956Z" level=info msg="CreateContainer within sandbox \"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 00:59:05.997630 containerd[1939]: time="2026-01-14T00:59:05.996439136Z" level=info msg="Container 6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:59:05.998530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207273398.mount: Deactivated successfully. Jan 14 00:59:06.051209 containerd[1939]: time="2026-01-14T00:59:06.051154853Z" level=info msg="CreateContainer within sandbox \"24a50de305f99c1ea4041ff98db872c71226f48215e9d7862ed01cfd13d90934\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f\"" Jan 14 00:59:06.052173 containerd[1939]: time="2026-01-14T00:59:06.052097125Z" level=info msg="StartContainer for \"6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f\"" Jan 14 00:59:06.056746 containerd[1939]: time="2026-01-14T00:59:06.056719613Z" level=info msg="connecting to shim 6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f" address="unix:///run/containerd/s/9db06182455f04e444547c68e3c2f603280889c9ef54fc85b02cc8805351b18f" protocol=ttrpc version=3 Jan 14 00:59:06.172446 systemd[1]: Started cri-containerd-6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f.scope - libcontainer container 6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f. Jan 14 00:59:06.250354 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 00:59:06.258504 kernel: audit: type=1334 audit(1768352346.246:584): prog-id=179 op=LOAD Jan 14 00:59:06.258557 kernel: audit: type=1300 audit(1768352346.246:584): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000da488 a2=98 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.246000 audit: BPF prog-id=179 op=LOAD Jan 14 00:59:06.246000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000da488 a2=98 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.262657 kernel: audit: type=1327 audit(1768352346.246:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.266994 kernel: audit: type=1334 audit(1768352346.249:585): prog-id=180 op=LOAD Jan 14 00:59:06.249000 audit: BPF prog-id=180 op=LOAD Jan 14 00:59:06.274616 kernel: audit: type=1300 audit(1768352346.249:585): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000da218 a2=98 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.249000 audit[4394]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=22 a0=5 a1=c0000da218 a2=98 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.282789 kernel: audit: type=1327 audit(1768352346.249:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.282891 kernel: audit: type=1334 audit(1768352346.249:586): prog-id=180 op=UNLOAD Jan 14 00:59:06.249000 audit: BPF prog-id=180 op=UNLOAD Jan 14 00:59:06.289117 kernel: audit: type=1300 audit(1768352346.249:586): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.249000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.295249 kernel: audit: type=1327 audit(1768352346.249:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.249000 audit: BPF prog-id=179 op=UNLOAD Jan 14 00:59:06.249000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.297283 kernel: audit: type=1334 audit(1768352346.249:587): prog-id=179 op=UNLOAD Jan 14 00:59:06.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.249000 audit: BPF prog-id=181 op=LOAD Jan 14 00:59:06.249000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000da6e8 a2=98 a3=0 items=0 ppid=3913 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:06.249000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630383965653539346432393464393866313764333530343364346332 Jan 14 00:59:06.376615 containerd[1939]: time="2026-01-14T00:59:06.376569793Z" level=info msg="StartContainer for \"6089ee594d294d98f17d35043d4c2b2b031b14d4d475d086aed423b214453d8f\" returns successfully" Jan 14 00:59:06.690845 kubelet[3293]: I0114 00:59:06.687456 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kccgl" podStartSLOduration=1.3052877729999999 podStartE2EDuration="21.687420084s" podCreationTimestamp="2026-01-14 00:58:45 +0000 UTC" firstStartedPulling="2026-01-14 00:58:45.497658557 +0000 UTC m=+52.444013396" lastFinishedPulling="2026-01-14 00:59:05.879790878 +0000 UTC m=+72.826145707" observedRunningTime="2026-01-14 00:59:06.674985826 +0000 UTC m=+73.621340679" watchObservedRunningTime="2026-01-14 00:59:06.687420084 +0000 UTC m=+73.633774935" Jan 14 00:59:08.596233 kubelet[3293]: I0114 00:59:08.596177 3293 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 00:59:09.306748 containerd[1939]: time="2026-01-14T00:59:09.306576176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4df85555-29mjx,Uid:f05abc55-0515-49ca-aacb-ebde63d756a4,Namespace:calico-system,Attempt:0,}" Jan 14 00:59:09.372535 containerd[1939]: time="2026-01-14T00:59:09.372489156Z" level=error msg="Failed to destroy network for sandbox \"a4158077f08e991334af41edd1f171908cb28ccabb66a95ead32a6e15068c37d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:59:09.374750 systemd[1]: run-netns-cni\x2db1c6f721\x2d1bcd\x2d1eb4\x2d522b\x2d5a2be9ba00b8.mount: Deactivated successfully. 
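The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is the gap from podCreationTimestamp (00:58:45) to watchObservedRunningTime, and podStartSLOduration is that same gap minus the image pull window (firstStartedPulling to lastFinishedPulling), i.e. the numbers line up as if the SLO figure excludes the time spent pulling ghcr.io/flatcar/calico/node:v3.30.4. A quick check with the figures copied from the record:

```go
package main

import "fmt"

func main() {
	// Figures copied from the pod_startup_latency_tracker record above, in seconds.
	e2e := 21.687420084                 // podCreationTimestamp -> watchObservedRunningTime
	pull := 72.826145707 - 52.444013396 // firstStartedPulling -> lastFinishedPulling (monotonic offsets)
	fmt.Printf("pull window:  %.9fs\n", pull)     // 20.382132311s
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // 1.305287773s, matching podStartSLOduration
}
```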
Jan 14 00:59:09.376249 containerd[1939]: time="2026-01-14T00:59:09.376116013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4df85555-29mjx,Uid:f05abc55-0515-49ca-aacb-ebde63d756a4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4158077f08e991334af41edd1f171908cb28ccabb66a95ead32a6e15068c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:59:09.376668 kubelet[3293]: E0114 00:59:09.376496 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4158077f08e991334af41edd1f171908cb28ccabb66a95ead32a6e15068c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:59:09.376668 kubelet[3293]: E0114 00:59:09.376552 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4158077f08e991334af41edd1f171908cb28ccabb66a95ead32a6e15068c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" Jan 14 00:59:09.376668 kubelet[3293]: E0114 00:59:09.376573 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4158077f08e991334af41edd1f171908cb28ccabb66a95ead32a6e15068c37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" Jan 14 00:59:09.377017 kubelet[3293]: E0114 00:59:09.376704 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4158077f08e991334af41edd1f171908cb28ccabb66a95ead32a6e15068c37d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:59:10.300756 containerd[1939]: time="2026-01-14T00:59:10.300718292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-pr6gg,Uid:7449b447-f9d5-45e2-8001-6763bc56b2d8,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:59:10.366858 containerd[1939]: time="2026-01-14T00:59:10.366811114Z" level=error msg="Failed to destroy network for sandbox \"6c45ff20f5a314c5871fb5f74c434e8dc65e18e788b6e88286bedf6d448fd183\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
14 00:59:10.368777 systemd[1]: run-netns-cni\x2daea1c1f6\x2d28c4\x2d2f4d\x2d066b\x2d69e20e2ba8f2.mount: Deactivated successfully. Jan 14 00:59:10.377211 containerd[1939]: time="2026-01-14T00:59:10.377079518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-pr6gg,Uid:7449b447-f9d5-45e2-8001-6763bc56b2d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c45ff20f5a314c5871fb5f74c434e8dc65e18e788b6e88286bedf6d448fd183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:59:10.377398 kubelet[3293]: E0114 00:59:10.377350 3293 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c45ff20f5a314c5871fb5f74c434e8dc65e18e788b6e88286bedf6d448fd183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 00:59:10.377688 kubelet[3293]: E0114 00:59:10.377430 3293 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c45ff20f5a314c5871fb5f74c434e8dc65e18e788b6e88286bedf6d448fd183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" Jan 14 00:59:10.377688 kubelet[3293]: E0114 00:59:10.377451 3293 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c45ff20f5a314c5871fb5f74c434e8dc65e18e788b6e88286bedf6d448fd183\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" Jan 14 00:59:10.377688 kubelet[3293]: E0114 00:59:10.377516 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c45ff20f5a314c5871fb5f74c434e8dc65e18e788b6e88286bedf6d448fd183\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 00:59:10.483246 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 00:59:10.484729 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
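Every sandbox failure above shares one root cause string: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is running with /var/lib/calico mounted from the host, so CNI ADD/DEL calls fail until then. A minimal sketch of a check of that shape (illustrative only, not Calico's source; path and message taken from the errors above):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameReady reports whether calico-node has written its nodename file yet.
// Until it exists, CNI calls fail with the error repeated throughout this log.
func nodenameReady(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println(err)
	}
}
```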
Jan 14 00:59:11.371066 kubelet[3293]: I0114 00:59:11.370476 3293 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-ca-bundle\") pod \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\" (UID: \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\") " Jan 14 00:59:11.371066 kubelet[3293]: I0114 00:59:11.370553 3293 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-backend-key-pair\") pod \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\" (UID: \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\") " Jan 14 00:59:11.371066 kubelet[3293]: I0114 00:59:11.370592 3293 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdzd\" (UniqueName: \"kubernetes.io/projected/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-kube-api-access-jwdzd\") pod \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\" (UID: \"25ffadd4-59b8-4f2d-9557-8aa31d4bee36\") " Jan 14 00:59:11.386841 systemd[1]: var-lib-kubelet-pods-25ffadd4\x2d59b8\x2d4f2d\x2d9557\x2d8aa31d4bee36-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwdzd.mount: Deactivated successfully. Jan 14 00:59:11.391905 kubelet[3293]: I0114 00:59:11.386040 3293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "25ffadd4-59b8-4f2d-9557-8aa31d4bee36" (UID: "25ffadd4-59b8-4f2d-9557-8aa31d4bee36"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 00:59:11.393615 kubelet[3293]: I0114 00:59:11.393589 3293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "25ffadd4-59b8-4f2d-9557-8aa31d4bee36" (UID: "25ffadd4-59b8-4f2d-9557-8aa31d4bee36"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 00:59:11.395113 systemd[1]: var-lib-kubelet-pods-25ffadd4\x2d59b8\x2d4f2d\x2d9557\x2d8aa31d4bee36-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 00:59:11.399603 kubelet[3293]: I0114 00:59:11.399555 3293 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-kube-api-access-jwdzd" (OuterVolumeSpecName: "kube-api-access-jwdzd") pod "25ffadd4-59b8-4f2d-9557-8aa31d4bee36" (UID: "25ffadd4-59b8-4f2d-9557-8aa31d4bee36"). InnerVolumeSpecName "kube-api-access-jwdzd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 00:59:11.471502 kubelet[3293]: I0114 00:59:11.471459 3293 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwdzd\" (UniqueName: \"kubernetes.io/projected/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-kube-api-access-jwdzd\") on node \"ip-172-31-19-12\" DevicePath \"\"" Jan 14 00:59:11.471502 kubelet[3293]: I0114 00:59:11.471502 3293 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-ca-bundle\") on node \"ip-172-31-19-12\" DevicePath \"\"" Jan 14 00:59:11.471718 kubelet[3293]: I0114 00:59:11.471514 3293 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25ffadd4-59b8-4f2d-9557-8aa31d4bee36-whisker-backend-key-pair\") on node \"ip-172-31-19-12\" DevicePath \"\"" Jan 14 00:59:11.614530 systemd[1]: Removed slice kubepods-besteffort-pod25ffadd4_59b8_4f2d_9557_8aa31d4bee36.slice - libcontainer container kubepods-besteffort-pod25ffadd4_59b8_4f2d_9557_8aa31d4bee36.slice. Jan 14 00:59:11.929553 systemd[1]: Created slice kubepods-besteffort-pod5410ee2c_498a_49d0_bc39_5e704c2599b9.slice - libcontainer container kubepods-besteffort-pod5410ee2c_498a_49d0_bc39_5e704c2599b9.slice. Jan 14 00:59:11.974808 kubelet[3293]: I0114 00:59:11.974752 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92d7b\" (UniqueName: \"kubernetes.io/projected/5410ee2c-498a-49d0-bc39-5e704c2599b9-kube-api-access-92d7b\") pod \"whisker-5c479bf86-lp2t6\" (UID: \"5410ee2c-498a-49d0-bc39-5e704c2599b9\") " pod="calico-system/whisker-5c479bf86-lp2t6" Jan 14 00:59:11.976368 kubelet[3293]: I0114 00:59:11.976338 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5410ee2c-498a-49d0-bc39-5e704c2599b9-whisker-backend-key-pair\") pod \"whisker-5c479bf86-lp2t6\" (UID: \"5410ee2c-498a-49d0-bc39-5e704c2599b9\") " pod="calico-system/whisker-5c479bf86-lp2t6" Jan 14 00:59:11.976547 kubelet[3293]: I0114 00:59:11.976381 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5410ee2c-498a-49d0-bc39-5e704c2599b9-whisker-ca-bundle\") pod \"whisker-5c479bf86-lp2t6\" (UID: \"5410ee2c-498a-49d0-bc39-5e704c2599b9\") " pod="calico-system/whisker-5c479bf86-lp2t6" Jan 14 00:59:12.234246 containerd[1939]: time="2026-01-14T00:59:12.233637580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c479bf86-lp2t6,Uid:5410ee2c-498a-49d0-bc39-5e704c2599b9,Namespace:calico-system,Attempt:0,}" Jan 14 00:59:12.303288 containerd[1939]: time="2026-01-14T00:59:12.303159336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mm224,Uid:68eb86c9-3a24-4178-8dfd-2032dfe5776a,Namespace:kube-system,Attempt:0,}" Jan 14 00:59:12.305945 containerd[1939]: time="2026-01-14T00:59:12.305081295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-cpdlw,Uid:5e14ba26-eb09-4a70-a4a6-9ad8bd987906,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:59:12.305945 containerd[1939]: time="2026-01-14T00:59:12.305132991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xlvwp,Uid:13972001-c667-49c0-9374-a2bbe47d8026,Namespace:calico-system,Attempt:0,}" Jan 14 
00:59:13.169000 audit: BPF prog-id=182 op=LOAD Jan 14 00:59:13.171689 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 14 00:59:13.172427 kernel: audit: type=1334 audit(1768352353.169:589): prog-id=182 op=LOAD Jan 14 00:59:13.169000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9c57eaf0 a2=98 a3=1fffffffffffffff items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.179227 kernel: audit: type=1300 audit(1768352353.169:589): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9c57eaf0 a2=98 a3=1fffffffffffffff items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.186080 kernel: audit: type=1327 audit(1768352353.169:589): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.186152 kernel: audit: type=1334 audit(1768352353.169:590): prog-id=182 op=UNLOAD Jan 14 00:59:13.169000 audit: BPF prog-id=182 op=UNLOAD Jan 14 00:59:13.169000 audit[4741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd9c57eac0 a3=0 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.193819 kernel: audit: type=1300 audit(1768352353.169:590): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd9c57eac0 a3=0 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.201200 kernel: audit: type=1327 audit(1768352353.169:590): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.169000 audit: BPF prog-id=183 op=LOAD Jan 14 00:59:13.208495 kernel: audit: type=1334 audit(1768352353.169:591): prog-id=183 op=LOAD Jan 14 00:59:13.208575 kernel: audit: type=1300 audit(1768352353.169:591): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9c57e9d0 a2=94 a3=3 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.213570 kernel: audit: type=1327 audit(1768352353.169:591): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.169000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9c57e9d0 a2=94 a3=3 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.215048 kernel: audit: type=1334 audit(1768352353.171:592): prog-id=183 op=UNLOAD Jan 14 00:59:13.171000 audit: BPF prog-id=183 op=UNLOAD Jan 14 00:59:13.171000 audit[4741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd9c57e9d0 a2=94 a3=3 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.171000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.171000 audit: BPF prog-id=184 op=LOAD Jan 14 00:59:13.171000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9c57ea10 a2=94 a3=7ffd9c57ebf0 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.171000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.171000 audit: BPF prog-id=184 op=UNLOAD Jan 14 00:59:13.171000 audit[4741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd9c57ea10 a2=94 a3=7ffd9c57ebf0 items=0 ppid=4638 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.171000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 00:59:13.178000 audit: BPF prog-id=185 op=LOAD Jan 14 00:59:13.178000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe5df8d800 a2=98 a3=3 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.178000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.178000 audit: BPF prog-id=185 op=UNLOAD Jan 14 00:59:13.178000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe5df8d7d0 a3=0 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.178000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.184000 audit: BPF prog-id=186 op=LOAD Jan 14 00:59:13.184000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5df8d5f0 a2=94 a3=54428f items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.184000 audit: BPF prog-id=186 op=UNLOAD Jan 14 00:59:13.184000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5df8d5f0 a2=94 a3=54428f items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.184000 audit: BPF prog-id=187 op=LOAD Jan 14 00:59:13.184000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5df8d620 a2=94 a3=2 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.184000 audit: BPF prog-id=187 op=UNLOAD Jan 14 00:59:13.184000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5df8d620 a2=0 a3=2 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.339584 containerd[1939]: time="2026-01-14T00:59:13.339534399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9259,Uid:6399b59d-47f1-4ce4-83ea-ea3fb09c0249,Namespace:kube-system,Attempt:0,}" Jan 14 00:59:13.342781 kubelet[3293]: I0114 00:59:13.342746 3293 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ffadd4-59b8-4f2d-9557-8aa31d4bee36" path="/var/lib/kubelet/pods/25ffadd4-59b8-4f2d-9557-8aa31d4bee36/volumes" Jan 14 00:59:13.367000 audit: BPF prog-id=188 op=LOAD Jan 14 00:59:13.367000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5df8d4e0 a2=94 a3=1 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.367000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.367000 audit: 
BPF prog-id=188 op=UNLOAD Jan 14 00:59:13.367000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5df8d4e0 a2=94 a3=1 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.367000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.379000 audit: BPF prog-id=189 op=LOAD Jan 14 00:59:13.379000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5df8d4d0 a2=94 a3=4 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.379000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.379000 audit: BPF prog-id=189 op=UNLOAD Jan 14 00:59:13.379000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe5df8d4d0 a2=0 a3=4 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.379000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.380000 audit: BPF prog-id=190 op=LOAD Jan 14 00:59:13.380000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5df8d330 a2=94 a3=5 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.380000 audit: BPF prog-id=190 op=UNLOAD Jan 14 00:59:13.380000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe5df8d330 a2=0 a3=5 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.380000 audit: BPF prog-id=191 op=LOAD Jan 14 00:59:13.380000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5df8d550 a2=94 a3=6 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.380000 audit: BPF prog-id=191 op=UNLOAD Jan 14 00:59:13.380000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe5df8d550 a2=0 a3=6 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.381000 audit: BPF prog-id=192 op=LOAD Jan 14 00:59:13.381000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5df8cd00 a2=94 a3=88 items=0 ppid=4638 pid=4742 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.381000 audit: BPF prog-id=193 op=LOAD Jan 14 00:59:13.381000 audit[4742]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe5df8cb80 a2=94 a3=2 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.381000 audit: BPF prog-id=193 op=UNLOAD Jan 14 00:59:13.381000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe5df8cbb0 a2=0 a3=7ffe5df8ccb0 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.382000 audit: BPF prog-id=192 op=UNLOAD Jan 14 00:59:13.382000 audit[4742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=20acdd10 a2=0 a3=b922f482d491fcc3 items=0 ppid=4638 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.382000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 00:59:13.473000 audit: BPF prog-id=194 op=LOAD Jan 14 00:59:13.473000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea0f2b120 a2=98 a3=1999999999999999 items=0 ppid=4638 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.473000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:59:13.473000 audit: BPF prog-id=194 op=UNLOAD Jan 14 00:59:13.473000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffea0f2b0f0 a3=0 items=0 ppid=4638 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.473000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:59:13.473000 audit: BPF prog-id=195 op=LOAD Jan 14 00:59:13.473000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea0f2b000 a2=94 a3=ffff items=0 ppid=4638 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.473000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:59:13.474000 audit: BPF prog-id=195 op=UNLOAD Jan 14 00:59:13.474000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea0f2b000 a2=94 a3=ffff items=0 ppid=4638 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.474000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:59:13.474000 audit: BPF prog-id=196 op=LOAD Jan 14 00:59:13.474000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea0f2b040 a2=94 a3=7ffea0f2b220 items=0 ppid=4638 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.474000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:59:13.474000 audit: BPF prog-id=196 op=UNLOAD Jan 14 00:59:13.474000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea0f2b040 a2=94 a3=7ffea0f2b220 items=0 ppid=4638 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.474000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 00:59:13.544061 (udev-worker)[4770]: Network interface NamePolicy= disabled on kernel command line. Jan 14 00:59:13.549894 systemd-networkd[1548]: vxlan.calico: Link UP Jan 14 00:59:13.550313 systemd-networkd[1548]: vxlan.calico: Gained carrier Jan 14 00:59:13.595065 (udev-worker)[4785]: Network interface NamePolicy= disabled on kernel command line. 
Jan 14 00:59:13.597000 audit: BPF prog-id=197 op=LOAD Jan 14 00:59:13.597000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff822ea170 a2=98 a3=0 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.597000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.598000 audit: BPF prog-id=197 op=UNLOAD Jan 14 00:59:13.598000 audit[4787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff822ea140 a3=0 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.598000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=198 op=LOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff822e9f80 a2=94 a3=54428f items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=198 op=UNLOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff822e9f80 a2=94 a3=54428f items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=199 op=LOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff822e9fb0 a2=94 a3=2 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=199 op=UNLOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff822e9fb0 a2=0 a3=2 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=200 op=LOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff822e9d60 a2=94 a3=4 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=200 op=UNLOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff822e9d60 a2=94 a3=4 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=201 op=LOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff822e9e60 a2=94 a3=7fff822e9fe0 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.627000 audit: BPF prog-id=201 op=UNLOAD Jan 14 00:59:13.627000 audit[4787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff822e9e60 a2=0 a3=7fff822e9fe0 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.627000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.628000 audit: BPF prog-id=202 op=LOAD Jan 14 00:59:13.628000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff822e9590 a2=94 a3=2 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.628000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.628000 audit: BPF prog-id=202 op=UNLOAD Jan 14 00:59:13.628000 audit[4787]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff822e9590 a2=0 a3=2 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.628000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.628000 audit: BPF prog-id=203 op=LOAD Jan 14 00:59:13.628000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff822e9690 a2=94 a3=30 items=0 ppid=4638 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.628000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 00:59:13.789000 audit: BPF prog-id=204 op=LOAD Jan 14 00:59:13.789000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62a61960 a2=98 a3=0 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.789000 audit: BPF prog-id=204 op=UNLOAD Jan 14 00:59:13.789000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd62a61930 a3=0 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.789000 audit: BPF prog-id=205 op=LOAD Jan 14 00:59:13.789000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd62a61750 a2=94 a3=54428f items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.789000 audit: BPF prog-id=205 op=UNLOAD Jan 14 00:59:13.789000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd62a61750 a2=94 a3=54428f items=0 ppid=4638 pid=4795 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.789000 audit: BPF prog-id=206 op=LOAD Jan 14 00:59:13.789000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd62a61780 a2=94 a3=2 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.789000 audit: BPF prog-id=206 op=UNLOAD Jan 14 00:59:13.789000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd62a61780 a2=0 a3=2 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.947000 audit: BPF prog-id=207 op=LOAD Jan 14 00:59:13.947000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd62a61640 a2=94 a3=1 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.947000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.947000 audit: BPF prog-id=207 op=UNLOAD Jan 14 00:59:13.947000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd62a61640 a2=94 a3=1 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.947000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.958000 audit: BPF prog-id=208 op=LOAD Jan 14 00:59:13.958000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd62a61630 a2=94 a3=4 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.959000 audit: BPF prog-id=208 op=UNLOAD Jan 14 00:59:13.959000 
audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd62a61630 a2=0 a3=4 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.959000 audit: BPF prog-id=209 op=LOAD Jan 14 00:59:13.959000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd62a61490 a2=94 a3=5 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.959000 audit: BPF prog-id=209 op=UNLOAD Jan 14 00:59:13.959000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd62a61490 a2=0 a3=5 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.960000 audit: BPF prog-id=210 op=LOAD Jan 14 00:59:13.960000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd62a616b0 a2=94 a3=6 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.960000 audit: BPF prog-id=210 op=UNLOAD Jan 14 00:59:13.960000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd62a616b0 a2=0 a3=6 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.960000 audit: BPF prog-id=211 op=LOAD Jan 14 00:59:13.960000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd62a60e60 a2=94 a3=88 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.960000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.960000 audit: BPF prog-id=212 op=LOAD Jan 14 00:59:13.960000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd62a60ce0 a2=94 a3=2 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.960000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.961000 audit: BPF prog-id=212 op=UNLOAD Jan 14 00:59:13.961000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd62a60d10 a2=0 a3=7ffd62a60e10 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.961000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.961000 audit: BPF prog-id=211 op=UNLOAD Jan 14 00:59:13.961000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=27153d10 a2=0 a3=28e11555538f1ed9 items=0 ppid=4638 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.961000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 00:59:13.968000 audit: BPF prog-id=203 op=UNLOAD Jan 14 00:59:13.968000 audit[4638]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ce8180 a2=0 a3=0 items=0 ppid=4627 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:13.968000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 00:59:14.094000 audit[4814]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:14.094000 audit[4814]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc524fc580 a2=0 a3=7ffc524fc56c items=0 ppid=4638 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:14.094000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:14.099000 audit[4818]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4818 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:14.099000 audit[4818]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 
a1=7fff31838630 a2=0 a3=7fff3183861c items=0 ppid=4638 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:14.099000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:14.102000 audit[4816]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4816 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:14.102000 audit[4816]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff98e480e0 a2=0 a3=7fff98e480cc items=0 ppid=4638 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:14.102000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:14.110000 audit[4820]: NETFILTER_CFG table=filter:122 family=2 entries=39 op=nft_register_chain pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:14.110000 audit[4820]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffd127d99a0 a2=0 a3=7ffd127d998c items=0 ppid=4638 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:14.110000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:14.301315 containerd[1939]: time="2026-01-14T00:59:14.301173281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tkkg,Uid:e723b976-5fd1-4159-bd8a-f3fe80761ec5,Namespace:calico-system,Attempt:0,}" Jan 14 00:59:15.307502 systemd-networkd[1548]: vxlan.calico: Gained IPv6LL Jan 14 00:59:15.881439 systemd-networkd[1548]: cali01a5d1589e8: Link UP Jan 14 00:59:15.882159 systemd-networkd[1548]: cali01a5d1589e8: Gained carrier Jan 14 00:59:15.902288 containerd[1939]: 2026-01-14 00:59:12.417 [INFO][4603] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:59:15.902288 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0 calico-apiserver-cdb9b59bb- calico-apiserver 5e14ba26-eb09-4a70-a4a6-9ad8bd987906 863 0 2026-01-14 00:58:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cdb9b59bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-12 calico-apiserver-cdb9b59bb-cpdlw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01a5d1589e8 [] [] }} ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-" Jan 
14 00:59:15.902288 containerd[1939]: 2026-01-14 00:59:12.643 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.902288 containerd[1939]: 2026-01-14 00:59:15.766 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" HandleID="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Workload="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.768 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" HandleID="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Workload="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000343720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-12", "pod":"calico-apiserver-cdb9b59bb-cpdlw", "timestamp":"2026-01-14 00:59:15.766772182 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.768 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.768 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.769 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.783 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" host="ip-172-31-19-12" Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.846 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.852 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.854 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:15.902797 containerd[1939]: 2026-01-14 00:59:15.856 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.856 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" host="ip-172-31-19-12" Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.857 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148 Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.865 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" host="ip-172-31-19-12" Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.872 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.193/26] block=192.168.64.192/26 handle="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" host="ip-172-31-19-12" Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.873 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.193/26] handle="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" host="ip-172-31-19-12" Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.873 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:59:15.903024 containerd[1939]: 2026-01-14 00:59:15.873 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.193/26] IPv6=[] ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" HandleID="k8s-pod-network.6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Workload="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.904660 containerd[1939]: 2026-01-14 00:59:15.876 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0", GenerateName:"calico-apiserver-cdb9b59bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e14ba26-eb09-4a70-a4a6-9ad8bd987906", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cdb9b59bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"calico-apiserver-cdb9b59bb-cpdlw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01a5d1589e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:15.904760 containerd[1939]: 2026-01-14 00:59:15.876 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.193/32] ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.904760 containerd[1939]: 2026-01-14 00:59:15.876 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01a5d1589e8 ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.904760 containerd[1939]: 2026-01-14 00:59:15.883 [INFO][4603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.904909 containerd[1939]: 2026-01-14 00:59:15.883 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0", GenerateName:"calico-apiserver-cdb9b59bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5e14ba26-eb09-4a70-a4a6-9ad8bd987906", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cdb9b59bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148", Pod:"calico-apiserver-cdb9b59bb-cpdlw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01a5d1589e8", MAC:"da:86:db:13:c6:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:15.904983 containerd[1939]: 2026-01-14 00:59:15.898 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-cpdlw" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--cpdlw-eth0" Jan 14 00:59:15.991278 systemd-networkd[1548]: calie10e6efd824: Link UP Jan 14 00:59:15.995125 systemd-networkd[1548]: calie10e6efd824: Gained carrier Jan 14 00:59:16.018003 containerd[1939]: 2026-01-14 00:59:12.267 [INFO][4572] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:59:16.018003 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0 whisker-5c479bf86- calico-system 5410ee2c-498a-49d0-bc39-5e704c2599b9 953 0 2026-01-14 00:59:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c479bf86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-12 whisker-5c479bf86-lp2t6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie10e6efd824 [] [] }} ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-" Jan 14 00:59:16.018003 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.018003 containerd[1939]: 2026-01-14 00:59:15.766 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" HandleID="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Workload="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.768 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" HandleID="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Workload="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001f4310), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"whisker-5c479bf86-lp2t6", "timestamp":"2026-01-14 00:59:15.766449077 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.768 [INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.873 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.873 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.886 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" host="ip-172-31-19-12" Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.945 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.952 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.954 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.957 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.018345 containerd[1939]: 2026-01-14 00:59:15.957 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" host="ip-172-31-19-12" Jan 14 00:59:16.020099 containerd[1939]: 2026-01-14 00:59:15.960 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99 Jan 14 00:59:16.020099 containerd[1939]: 2026-01-14 00:59:15.964 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" host="ip-172-31-19-12" Jan 14 00:59:16.020099 
containerd[1939]: 2026-01-14 00:59:15.971 [INFO][4658] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.194/26] block=192.168.64.192/26 handle="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" host="ip-172-31-19-12" Jan 14 00:59:16.020099 containerd[1939]: 2026-01-14 00:59:15.971 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.194/26] handle="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" host="ip-172-31-19-12" Jan 14 00:59:16.020099 containerd[1939]: 2026-01-14 00:59:15.971 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:59:16.020099 containerd[1939]: 2026-01-14 00:59:15.971 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.194/26] IPv6=[] ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" HandleID="k8s-pod-network.6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Workload="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.020376 containerd[1939]: 2026-01-14 00:59:15.979 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0", GenerateName:"whisker-5c479bf86-", Namespace:"calico-system", SelfLink:"", UID:"5410ee2c-498a-49d0-bc39-5e704c2599b9", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c479bf86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"whisker-5c479bf86-lp2t6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie10e6efd824", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.020376 containerd[1939]: 2026-01-14 00:59:15.979 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.194/32] ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.020481 containerd[1939]: 2026-01-14 00:59:15.980 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie10e6efd824 ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.020481 containerd[1939]: 
2026-01-14 00:59:15.997 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.020540 containerd[1939]: 2026-01-14 00:59:15.998 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0", GenerateName:"whisker-5c479bf86-", Namespace:"calico-system", SelfLink:"", UID:"5410ee2c-498a-49d0-bc39-5e704c2599b9", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 59, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c479bf86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99", Pod:"whisker-5c479bf86-lp2t6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie10e6efd824", MAC:"0a:c9:d1:c5:07:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.020597 containerd[1939]: 2026-01-14 00:59:16.014 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" Namespace="calico-system" Pod="whisker-5c479bf86-lp2t6" WorkloadEndpoint="ip--172--31--19--12-k8s-whisker--5c479bf86--lp2t6-eth0" Jan 14 00:59:16.112697 systemd-networkd[1548]: cali90da3e2c685: Link UP Jan 14 00:59:16.117813 systemd-networkd[1548]: cali90da3e2c685: Gained carrier Jan 14 00:59:16.177966 containerd[1939]: 2026-01-14 00:59:12.401 [INFO][4583] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:59:16.177966 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0 coredns-674b8bbfcf- kube-system 68eb86c9-3a24-4178-8dfd-2032dfe5776a 860 0 2026-01-14 00:57:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-12 coredns-674b8bbfcf-mm224 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90da3e2c685 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-" Jan 14 00:59:16.177966 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.177966 containerd[1939]: 2026-01-14 00:59:15.766 [INFO][4665] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" HandleID="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Workload="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:15.768 [INFO][4665] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" HandleID="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Workload="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003343a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-12", "pod":"coredns-674b8bbfcf-mm224", "timestamp":"2026-01-14 00:59:15.76680413 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:15.769 [INFO][4665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:15.971 [INFO][4665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:15.971 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:15.997 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" host="ip-172-31-19-12" Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:16.047 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:16.053 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:16.055 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:16.057 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.178300 containerd[1939]: 2026-01-14 00:59:16.057 [INFO][4665] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" host="ip-172-31-19-12" Jan 14 00:59:16.180141 containerd[1939]: 2026-01-14 00:59:16.059 [INFO][4665] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998 Jan 14 00:59:16.180141 containerd[1939]: 2026-01-14 00:59:16.064 [INFO][4665] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" host="ip-172-31-19-12" Jan 14 00:59:16.180141 containerd[1939]: 2026-01-14 00:59:16.088 [INFO][4665] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.195/26] block=192.168.64.192/26 handle="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" host="ip-172-31-19-12" Jan 14 00:59:16.180141 containerd[1939]: 2026-01-14 00:59:16.088 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.195/26] handle="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" host="ip-172-31-19-12" Jan 14 00:59:16.180141 containerd[1939]: 2026-01-14 00:59:16.088 [INFO][4665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:59:16.180141 containerd[1939]: 2026-01-14 00:59:16.090 [INFO][4665] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.195/26] IPv6=[] ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" HandleID="k8s-pod-network.a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Workload="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.180943 containerd[1939]: 2026-01-14 00:59:16.105 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"68eb86c9-3a24-4178-8dfd-2032dfe5776a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"coredns-674b8bbfcf-mm224", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90da3e2c685", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.180943 containerd[1939]: 2026-01-14 00:59:16.105 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.195/32] ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.180943 containerd[1939]: 2026-01-14 00:59:16.105 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90da3e2c685 ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.180943 containerd[1939]: 2026-01-14 00:59:16.127 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" 
WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.180943 containerd[1939]: 2026-01-14 00:59:16.127 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"68eb86c9-3a24-4178-8dfd-2032dfe5776a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998", Pod:"coredns-674b8bbfcf-mm224", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90da3e2c685", MAC:"e6:72:27:f1:8b:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.180943 containerd[1939]: 2026-01-14 00:59:16.163 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" Namespace="kube-system" Pod="coredns-674b8bbfcf-mm224" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--mm224-eth0" Jan 14 00:59:16.195000 audit[4853]: NETFILTER_CFG table=filter:123 family=2 entries=46 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:16.195000 audit[4853]: SYSCALL arch=c000003e syscall=46 success=yes exit=27020 a0=3 a1=7ffd4e05d4f0 a2=0 a3=7ffd4e05d4dc items=0 ppid=4638 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.195000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:16.261154 containerd[1939]: time="2026-01-14T00:59:16.260952415Z" level=info msg="connecting to shim 
6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148" address="unix:///run/containerd/s/695d4515da23807d1db8ffc75adb56a082f73710d3953d0c9bb389669b49bb0b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:16.267103 systemd-networkd[1548]: cali40acdcb5b70: Link UP Jan 14 00:59:16.268291 systemd-networkd[1548]: cali40acdcb5b70: Gained carrier Jan 14 00:59:16.289807 containerd[1939]: time="2026-01-14T00:59:16.289758544Z" level=info msg="connecting to shim 6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99" address="unix:///run/containerd/s/720d5b61591a3f180f3f22e0a66ce014b3fe28ebab5bcc0ee3888085e23540fe" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:16.334087 containerd[1939]: time="2026-01-14T00:59:16.333451526Z" level=info msg="connecting to shim a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998" address="unix:///run/containerd/s/06ca2e921a022175dd23cadb8a2a6b6e5309f3345ec2eb2c74ae229ef6f1e7ba" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:16.351000 audit[4934]: NETFILTER_CFG table=filter:124 family=2 entries=97 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:16.351000 audit[4934]: SYSCALL arch=c000003e syscall=46 success=yes exit=56700 a0=3 a1=7ffc944ee3d0 a2=0 a3=7ffc944ee3bc items=0 ppid=4638 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.351000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:12.393 [INFO][4592] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0 goldmane-666569f655- calico-system 13972001-c667-49c0-9374-a2bbe47d8026 861 0 2026-01-14 00:58:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-12 goldmane-666569f655-xlvwp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40acdcb5b70 [] [] }} ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:12.642 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:15.766 [INFO][4660] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" HandleID="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Workload="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.355733 
containerd[1939]: 2026-01-14 00:59:15.769 [INFO][4660] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" HandleID="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Workload="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103c30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"goldmane-666569f655-xlvwp", "timestamp":"2026-01-14 00:59:15.766449327 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:15.769 [INFO][4660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.088 [INFO][4660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.088 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.104 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.148 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.165 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.176 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.187 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.188 [INFO][4660] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.194 [INFO][4660] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956 Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.211 [INFO][4660] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.233 [INFO][4660] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.196/26] block=192.168.64.192/26 handle="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.234 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.196/26] handle="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" host="ip-172-31-19-12" Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.234 [INFO][4660] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Jan 14 00:59:16.355733 containerd[1939]: 2026-01-14 00:59:16.234 [INFO][4660] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.196/26] IPv6=[] ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" HandleID="k8s-pod-network.26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Workload="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.357146 containerd[1939]: 2026-01-14 00:59:16.255 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"13972001-c667-49c0-9374-a2bbe47d8026", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"goldmane-666569f655-xlvwp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40acdcb5b70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.357146 containerd[1939]: 2026-01-14 00:59:16.255 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.196/32] ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.357146 containerd[1939]: 2026-01-14 00:59:16.255 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40acdcb5b70 ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.357146 containerd[1939]: 2026-01-14 00:59:16.268 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.357146 containerd[1939]: 2026-01-14 00:59:16.292 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" 
Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"13972001-c667-49c0-9374-a2bbe47d8026", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956", Pod:"goldmane-666569f655-xlvwp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40acdcb5b70", MAC:"7a:1e:e8:a6:80:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.357146 containerd[1939]: 2026-01-14 00:59:16.336 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" Namespace="calico-system" Pod="goldmane-666569f655-xlvwp" WorkloadEndpoint="ip--172--31--19--12-k8s-goldmane--666569f655--xlvwp-eth0" Jan 14 00:59:16.369764 systemd[1]: Started cri-containerd-6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99.scope - libcontainer container 6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99. Jan 14 00:59:16.419936 systemd[1]: Started cri-containerd-6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148.scope - libcontainer container 6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148. Jan 14 00:59:16.461447 systemd[1]: Started cri-containerd-a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998.scope - libcontainer container a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998. 
Jan 14 00:59:16.466634 containerd[1939]: time="2026-01-14T00:59:16.466558783Z" level=info msg="connecting to shim 26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956" address="unix:///run/containerd/s/b85cca7c481a0f10eb9af6e289dde6a8d6a8c6c9595ae457248d26cb9ac582ec" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:16.481000 audit: BPF prog-id=213 op=LOAD Jan 14 00:59:16.483000 audit: BPF prog-id=214 op=LOAD Jan 14 00:59:16.483000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.483000 audit: BPF prog-id=214 op=UNLOAD Jan 14 00:59:16.483000 audit[4914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.485000 audit: BPF prog-id=215 op=LOAD Jan 14 00:59:16.485000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.486000 audit: BPF prog-id=216 op=LOAD Jan 14 00:59:16.486000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.486000 audit: BPF prog-id=216 op=UNLOAD Jan 14 00:59:16.486000 audit[4914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.486000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.486000 audit: BPF prog-id=215 op=UNLOAD Jan 14 00:59:16.486000 audit[4914]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.486000 audit: BPF prog-id=217 op=LOAD Jan 14 00:59:16.486000 audit[4914]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4894 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366532633264613536306630333961623139613639633837646237 Jan 14 00:59:16.498000 audit: BPF prog-id=218 op=LOAD Jan 14 00:59:16.504000 audit: BPF prog-id=219 op=LOAD Jan 14 00:59:16.504000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.504000 audit: BPF prog-id=219 op=UNLOAD Jan 14 00:59:16.504000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.504000 audit: BPF prog-id=220 op=LOAD Jan 14 00:59:16.504000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.504000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.504000 audit: BPF prog-id=221 op=LOAD Jan 14 00:59:16.504000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.504000 audit: BPF prog-id=221 op=UNLOAD Jan 14 00:59:16.504000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.505000 audit: BPF prog-id=220 op=UNLOAD Jan 14 00:59:16.505000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.507000 audit[5033]: NETFILTER_CFG table=filter:125 family=2 entries=52 op=nft_register_chain pid=5033 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:16.507000 audit[5033]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffc2af82800 a2=0 a3=7ffc2af827ec items=0 ppid=4638 pid=5033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.507000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:16.506000 audit: BPF prog-id=222 op=LOAD Jan 14 00:59:16.506000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4910 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.506000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135353230326163363337666632323130633036383733303231393861 Jan 14 00:59:16.549469 systemd[1]: Started cri-containerd-26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956.scope - libcontainer container 26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956. Jan 14 00:59:16.622000 audit: BPF prog-id=223 op=LOAD Jan 14 00:59:16.624000 audit: BPF prog-id=224 op=LOAD Jan 14 00:59:16.624000 audit[4925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.624000 audit: BPF prog-id=224 op=UNLOAD Jan 14 00:59:16.624000 audit[4925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.627000 audit: BPF prog-id=225 op=LOAD Jan 14 00:59:16.627000 audit[4925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.627000 audit: BPF prog-id=226 op=LOAD Jan 14 00:59:16.627000 audit[4925]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.627000 audit: BPF prog-id=226 op=UNLOAD Jan 14 00:59:16.627000 audit[4925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.627000 audit: BPF prog-id=225 op=UNLOAD Jan 14 00:59:16.627000 audit[4925]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.627000 audit: BPF prog-id=227 op=LOAD Jan 14 00:59:16.627000 audit[4925]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4877 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661646465376634616561303963336331333235343766306437623637 Jan 14 00:59:16.633000 audit: BPF prog-id=228 op=LOAD Jan 14 00:59:16.641000 audit: BPF prog-id=229 op=LOAD Jan 14 00:59:16.641000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.641000 audit: BPF prog-id=229 op=UNLOAD Jan 14 00:59:16.641000 audit[5037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.641000 audit: BPF prog-id=230 op=LOAD Jan 14 00:59:16.641000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:59:16.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.641000 audit: BPF prog-id=231 op=LOAD Jan 14 00:59:16.641000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.645000 audit: BPF prog-id=231 op=UNLOAD Jan 14 00:59:16.645000 audit[5037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.645000 audit: BPF prog-id=230 op=UNLOAD Jan 14 00:59:16.645000 audit[5037]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.650876 containerd[1939]: time="2026-01-14T00:59:16.650545325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mm224,Uid:68eb86c9-3a24-4178-8dfd-2032dfe5776a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998\"" Jan 14 00:59:16.645000 audit: BPF prog-id=232 op=LOAD Jan 14 00:59:16.645000 audit[5037]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5007 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643466363433346536336438363331653961393536626366633636 Jan 14 00:59:16.677557 containerd[1939]: time="2026-01-14T00:59:16.676608627Z" level=info msg="CreateContainer within sandbox \"a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:59:16.697177 containerd[1939]: time="2026-01-14T00:59:16.697030168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c479bf86-lp2t6,Uid:5410ee2c-498a-49d0-bc39-5e704c2599b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c6e2c2da560f039ab19a69c87db7250180e6568e2956846e16677c96458ef99\"" Jan 14 00:59:16.750864 containerd[1939]: time="2026-01-14T00:59:16.749108597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:59:16.775666 containerd[1939]: time="2026-01-14T00:59:16.775335086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-cpdlw,Uid:5e14ba26-eb09-4a70-a4a6-9ad8bd987906,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6adde7f4aea09c3c132547f0d7b6723c118c3d94824db48ac78758d033c77148\"" Jan 14 00:59:16.822901 containerd[1939]: time="2026-01-14T00:59:16.822766264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xlvwp,Uid:13972001-c667-49c0-9374-a2bbe47d8026,Namespace:calico-system,Attempt:0,} returns sandbox id \"26d4f6434e63d8631e9a956bcfc66259bb5aa71b05860a6d0f3007babd693956\"" Jan 14 00:59:16.850246 systemd-networkd[1548]: cali45356dc7017: Link UP Jan 14 00:59:16.851743 systemd-networkd[1548]: cali45356dc7017: Gained carrier Jan 14 00:59:16.868565 containerd[1939]: time="2026-01-14T00:59:16.866726150Z" level=info msg="Container bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:59:16.880734 containerd[1939]: time="2026-01-14T00:59:16.880666697Z" level=info msg="CreateContainer within sandbox \"a55202ac637ff2210c0687302198a51fbbac3f0657c4226023a407ccaa79a998\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5\"" Jan 14 00:59:16.882034 containerd[1939]: time="2026-01-14T00:59:16.881803782Z" level=info msg="StartContainer for \"bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5\"" Jan 14 00:59:16.883001 containerd[1939]: time="2026-01-14T00:59:16.882972140Z" level=info msg="connecting to shim bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5" address="unix:///run/containerd/s/06ca2e921a022175dd23cadb8a2a6b6e5309f3345ec2eb2c74ae229ef6f1e7ba" protocol=ttrpc version=3 Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.526 [INFO][4955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0 csi-node-driver- calico-system e723b976-5fd1-4159-bd8a-f3fe80761ec5 743 0 2026-01-14 00:58:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-12 csi-node-driver-7tkkg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali45356dc7017 [] [] }} ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.529 [INFO][4955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.751 [INFO][5052] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" HandleID="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Workload="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.751 [INFO][5052] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" HandleID="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Workload="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000241ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"csi-node-driver-7tkkg", "timestamp":"2026-01-14 00:59:16.751578585 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.751 [INFO][5052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.751 [INFO][5052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.751 [INFO][5052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.772 [INFO][5052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.779 [INFO][5052] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.789 [INFO][5052] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.793 [INFO][5052] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.797 [INFO][5052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.797 [INFO][5052] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.800 [INFO][5052] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2 Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.816 [INFO][5052] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" host="ip-172-31-19-12" Jan 14 00:59:16.901062 
containerd[1939]: 2026-01-14 00:59:16.834 [INFO][5052] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.197/26] block=192.168.64.192/26 handle="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.834 [INFO][5052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.197/26] handle="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" host="ip-172-31-19-12" Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.834 [INFO][5052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:59:16.901062 containerd[1939]: 2026-01-14 00:59:16.834 [INFO][5052] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.197/26] IPv6=[] ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" HandleID="k8s-pod-network.9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Workload="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.903798 containerd[1939]: 2026-01-14 00:59:16.845 [INFO][4955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e723b976-5fd1-4159-bd8a-f3fe80761ec5", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"csi-node-driver-7tkkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali45356dc7017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.903798 containerd[1939]: 2026-01-14 00:59:16.845 [INFO][4955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.197/32] ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.903798 containerd[1939]: 2026-01-14 00:59:16.845 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45356dc7017 ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" 
WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.903798 containerd[1939]: 2026-01-14 00:59:16.851 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.903798 containerd[1939]: 2026-01-14 00:59:16.854 [INFO][4955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e723b976-5fd1-4159-bd8a-f3fe80761ec5", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2", Pod:"csi-node-driver-7tkkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali45356dc7017", MAC:"c2:9d:6e:52:4e:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:16.903798 containerd[1939]: 2026-01-14 00:59:16.888 [INFO][4955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" Namespace="calico-system" Pod="csi-node-driver-7tkkg" WorkloadEndpoint="ip--172--31--19--12-k8s-csi--node--driver--7tkkg-eth0" Jan 14 00:59:16.931913 systemd[1]: Started cri-containerd-bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5.scope - libcontainer container bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5. 
Jan 14 00:59:16.963000 audit[5123]: NETFILTER_CFG table=filter:126 family=2 entries=54 op=nft_register_chain pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:16.963000 audit[5123]: SYSCALL arch=c000003e syscall=46 success=yes exit=25992 a0=3 a1=7ffdcc686770 a2=0 a3=7ffdcc68675c items=0 ppid=4638 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:16.963000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:16.979396 systemd-networkd[1548]: cali1c9a34be8b4: Link UP Jan 14 00:59:16.979635 systemd-networkd[1548]: cali1c9a34be8b4: Gained carrier Jan 14 00:59:16.998000 audit: BPF prog-id=233 op=LOAD Jan 14 00:59:17.007000 audit: BPF prog-id=234 op=LOAD Jan 14 00:59:17.007000 audit[5097]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.009000 audit: BPF prog-id=234 op=UNLOAD Jan 14 00:59:17.009000 audit[5097]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.010000 audit: BPF prog-id=235 op=LOAD Jan 14 00:59:17.010000 audit[5097]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.012000 audit: BPF prog-id=236 op=LOAD Jan 14 00:59:17.012000 audit[5097]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.012000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.013000 audit: BPF prog-id=236 op=UNLOAD Jan 14 00:59:17.013000 audit[5097]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.013000 audit: BPF prog-id=235 op=UNLOAD Jan 14 00:59:17.013000 audit[5097]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.013000 audit: BPF prog-id=237 op=LOAD Jan 14 00:59:17.013000 audit[5097]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4910 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263656139353238393132303431323233646235636635376663326238 Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.547 [INFO][4956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0 coredns-674b8bbfcf- kube-system 6399b59d-47f1-4ce4-83ea-ea3fb09c0249 854 0 2026-01-14 00:57:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-12 coredns-674b8bbfcf-w9259 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c9a34be8b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.549 [INFO][4956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" 
WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.756 [INFO][5058] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" HandleID="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Workload="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.756 [INFO][5058] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" HandleID="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Workload="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e670), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-12", "pod":"coredns-674b8bbfcf-w9259", "timestamp":"2026-01-14 00:59:16.756688024 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.756 [INFO][5058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.835 [INFO][5058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.835 [INFO][5058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.875 [INFO][5058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.899 [INFO][5058] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.913 [INFO][5058] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.918 [INFO][5058] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.927 [INFO][5058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.927 [INFO][5058] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.933 [INFO][5058] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382 Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.954 [INFO][5058] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.966 [INFO][5058] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.198/26] 
block=192.168.64.192/26 handle="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.967 [INFO][5058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.198/26] handle="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" host="ip-172-31-19-12" Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.967 [INFO][5058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 00:59:17.017490 containerd[1939]: 2026-01-14 00:59:16.967 [INFO][5058] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.198/26] IPv6=[] ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" HandleID="k8s-pod-network.d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Workload="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.020217 containerd[1939]: 2026-01-14 00:59:16.973 [INFO][4956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6399b59d-47f1-4ce4-83ea-ea3fb09c0249", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"coredns-674b8bbfcf-w9259", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c9a34be8b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:17.020217 containerd[1939]: 2026-01-14 00:59:16.973 [INFO][4956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.198/32] ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.020217 containerd[1939]: 2026-01-14 00:59:16.974 [INFO][4956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali1c9a34be8b4 ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.020217 containerd[1939]: 2026-01-14 00:59:16.977 [INFO][4956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.020217 containerd[1939]: 2026-01-14 00:59:16.977 [INFO][4956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6399b59d-47f1-4ce4-83ea-ea3fb09c0249", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382", Pod:"coredns-674b8bbfcf-w9259", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c9a34be8b4", MAC:"06:e9:5e:ae:8d:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:17.020217 containerd[1939]: 2026-01-14 00:59:17.009 [INFO][4956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" Namespace="kube-system" Pod="coredns-674b8bbfcf-w9259" WorkloadEndpoint="ip--172--31--19--12-k8s-coredns--674b8bbfcf--w9259-eth0" Jan 14 00:59:17.031257 containerd[1939]: time="2026-01-14T00:59:17.031146677Z" level=info msg="connecting to shim 9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2" address="unix:///run/containerd/s/f32ffaea827753447a7316792e208a9968b33f3b1fa8e74890cd86598cf58499" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:17.049000 audit[5154]: NETFILTER_CFG table=filter:127 family=2 
entries=44 op=nft_register_chain pid=5154 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:17.049000 audit[5154]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7ffed9dc89a0 a2=0 a3=7ffed9dc898c items=0 ppid=4638 pid=5154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.049000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:17.073966 systemd[1]: Started cri-containerd-9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2.scope - libcontainer container 9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2. Jan 14 00:59:17.082365 containerd[1939]: time="2026-01-14T00:59:17.082169758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:17.101299 containerd[1939]: time="2026-01-14T00:59:17.101005329Z" level=info msg="connecting to shim d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382" address="unix:///run/containerd/s/6a99ac5e7d5f7352e08c785023453a23733014bdb57cae70cb53ccca629b761a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:17.104725 containerd[1939]: time="2026-01-14T00:59:17.104662716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:59:17.105068 containerd[1939]: time="2026-01-14T00:59:17.104999921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:17.105524 kubelet[3293]: E0114 00:59:17.105320 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:59:17.110769 kubelet[3293]: E0114 00:59:17.110570 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:59:17.111280 containerd[1939]: time="2026-01-14T00:59:17.111252737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:59:17.120000 audit: BPF prog-id=238 op=LOAD Jan 14 00:59:17.120000 audit: BPF prog-id=239 op=LOAD Jan 14 00:59:17.120000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.120000 audit: BPF prog-id=239 op=UNLOAD Jan 14 
00:59:17.120000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.121000 audit: BPF prog-id=240 op=LOAD Jan 14 00:59:17.121000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.121000 audit: BPF prog-id=241 op=LOAD Jan 14 00:59:17.121000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.121000 audit: BPF prog-id=241 op=UNLOAD Jan 14 00:59:17.121000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.121000 audit: BPF prog-id=240 op=UNLOAD Jan 14 00:59:17.121000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.121000 audit: BPF prog-id=242 op=LOAD Jan 14 00:59:17.121000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5141 pid=5153 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964313635643165633034376230636562343564313137626133313430 Jan 14 00:59:17.127971 kubelet[3293]: E0114 00:59:17.127917 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71e9debd4c934ffd9f54d469c8ec9440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:17.132401 containerd[1939]: time="2026-01-14T00:59:17.132347130Z" level=info msg="StartContainer for \"bcea9528912041223db5cf57fc2b8d95d1f46671b8756e8dcd7b950b14699ff5\" returns successfully" Jan 14 00:59:17.174732 containerd[1939]: time="2026-01-14T00:59:17.174649146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tkkg,Uid:e723b976-5fd1-4159-bd8a-f3fe80761ec5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d165d1ec047b0ceb45d117ba3140e2a98bdfb23ff2eb4000541d6b9c3cebdd2\"" Jan 14 00:59:17.176610 systemd[1]: Started cri-containerd-d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382.scope - libcontainer container d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382. 
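
The audit SYSCALL records identify calls only by number; arch=c000003e marks them as x86-64. The lookup below covers the three numbers that recur in this section, assuming the standard x86-64 syscall table; it is only a reading aid for the records above.

    # syscall_names.py - name the x86-64 syscall numbers seen in the audit
    # records above (arch=c000003e is AUDIT_ARCH_X86_64).
    X86_64_SYSCALLS = {
        3: "close",     # descriptor closes by runc, paired with the BPF prog-id UNLOAD events
        46: "sendmsg",  # netlink sends from iptables-nft-restore loading nftables chains
        321: "bpf",     # BPF program loads by runc, paired with the BPF prog-id LOAD events
    }

    def describe(number: int) -> str:
        return X86_64_SYSCALLS.get(number, f"unknown ({number})")

    if __name__ == "__main__":
        for number in (321, 3, 46):
            print(number, "->", describe(number))
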
Jan 14 00:59:17.195000 audit: BPF prog-id=243 op=LOAD Jan 14 00:59:17.195000 audit: BPF prog-id=244 op=LOAD Jan 14 00:59:17.195000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.195000 audit: BPF prog-id=244 op=UNLOAD Jan 14 00:59:17.195000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.196000 audit: BPF prog-id=245 op=LOAD Jan 14 00:59:17.196000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.196000 audit: BPF prog-id=246 op=LOAD Jan 14 00:59:17.196000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.196000 audit: BPF prog-id=246 op=UNLOAD Jan 14 00:59:17.196000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.196000 audit: BPF prog-id=245 op=UNLOAD Jan 14 00:59:17.196000 audit[5211]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.196000 audit: BPF prog-id=247 op=LOAD Jan 14 00:59:17.196000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5189 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431656632326664363565633237323032386165373464333466326238 Jan 14 00:59:17.238970 containerd[1939]: time="2026-01-14T00:59:17.238911624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9259,Uid:6399b59d-47f1-4ce4-83ea-ea3fb09c0249,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382\"" Jan 14 00:59:17.247057 containerd[1939]: time="2026-01-14T00:59:17.247014366Z" level=info msg="CreateContainer within sandbox \"d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 00:59:17.264676 containerd[1939]: time="2026-01-14T00:59:17.262890604Z" level=info msg="Container 848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1: CDI devices from CRI Config.CDIDevices: []" Jan 14 00:59:17.277890 containerd[1939]: time="2026-01-14T00:59:17.277845539Z" level=info msg="CreateContainer within sandbox \"d1ef22fd65ec272028ae74d34f2b8473fe709ad50f3e5a6261241f6e69839382\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1\"" Jan 14 00:59:17.278543 containerd[1939]: time="2026-01-14T00:59:17.278517310Z" level=info msg="StartContainer for \"848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1\"" Jan 14 00:59:17.280326 containerd[1939]: time="2026-01-14T00:59:17.280293489Z" level=info msg="connecting to shim 848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1" address="unix:///run/containerd/s/6a99ac5e7d5f7352e08c785023453a23733014bdb57cae70cb53ccca629b761a" protocol=ttrpc version=3 Jan 14 00:59:17.291912 systemd-networkd[1548]: cali01a5d1589e8: Gained IPv6LL Jan 14 00:59:17.320456 systemd[1]: Started cri-containerd-848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1.scope - libcontainer container 848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1. 
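
Each containerd entry above wraps a logfmt-style record (time=..., level=..., msg=...) inside the journal line. A rough parser for pulling those fields out, so the CRI sequence (RunPodSandbox, CreateContainer, StartContainer, connecting to shim) is easier to follow; the regex is an assumption about this output format, not something shipped with containerd.

    # parse_containerd_log.py - extract key=value fields from containerd's
    # logfmt-style lines, handling escaped quotes inside msg="...".
    import re

    FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_fields(line: str) -> dict:
        fields = {}
        for key, value in FIELD.findall(line):
            if value.startswith('"') and value.endswith('"'):
                value = value[1:-1].replace('\\"', '"')
            fields[key] = value
        return fields

    if __name__ == "__main__":
        sample = ('time="2026-01-14T00:59:17.278517310Z" level=info '
                  'msg="StartContainer for \\"848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1\\""')
        record = parse_fields(sample)
        print(record["level"], "|", record["msg"])
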
Jan 14 00:59:17.334000 audit: BPF prog-id=248 op=LOAD Jan 14 00:59:17.334000 audit: BPF prog-id=249 op=LOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.334000 audit: BPF prog-id=249 op=UNLOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.334000 audit: BPF prog-id=250 op=LOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.334000 audit: BPF prog-id=251 op=LOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.334000 audit: BPF prog-id=251 op=UNLOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.334000 audit: BPF prog-id=250 op=UNLOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.334000 audit: BPF prog-id=252 op=LOAD Jan 14 00:59:17.334000 audit[5239]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5189 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834386336333330623233396165336263386438383637633133376530 Jan 14 00:59:17.369937 containerd[1939]: time="2026-01-14T00:59:17.369709572Z" level=info msg="StartContainer for \"848c6330b239ae3bc8d8867c137e01ac9ddcae1136b6f9f86abae5ef548f1be1\" returns successfully" Jan 14 00:59:17.377167 containerd[1939]: time="2026-01-14T00:59:17.377124662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:17.379506 containerd[1939]: time="2026-01-14T00:59:17.379409162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:59:17.379506 containerd[1939]: time="2026-01-14T00:59:17.379466180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:17.379782 kubelet[3293]: E0114 00:59:17.379623 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:17.379782 kubelet[3293]: E0114 00:59:17.379666 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:17.379966 kubelet[3293]: E0114 00:59:17.379910 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hskc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-cpdlw_calico-apiserver(5e14ba26-eb09-4a70-a4a6-9ad8bd987906): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:17.380092 containerd[1939]: time="2026-01-14T00:59:17.380039541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:59:17.387628 kubelet[3293]: E0114 00:59:17.381594 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:59:17.626428 kubelet[3293]: E0114 00:59:17.626394 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:59:17.680760 containerd[1939]: time="2026-01-14T00:59:17.680621669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
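
The pod_startup_latency_tracker entry above reports podStartSLOduration=78.695592863 for coredns-674b8bbfcf-w9259; with no image pull recorded (firstStartedPulling and lastFinishedPulling are the zero time), that figure lines up with watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic from the two timestamps printed in the log:

    # pod_startup_duration.py - reproduce the ~78.7 s startup duration kubelet
    # reports for coredns-674b8bbfcf-w9259 from the timestamps in the log.
    from datetime import datetime, timezone

    created = datetime(2026, 1, 14, 0, 57, 59, tzinfo=timezone.utc)           # podCreationTimestamp
    observed = datetime(2026, 1, 14, 0, 59, 17, 695592, tzinfo=timezone.utc)  # watchObservedRunningTime (microseconds)

    duration = (observed - created).total_seconds()
    print(f"{duration:.6f}s")  # 78.695592s; the log reports 78.695592863 at nanosecond precision
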
Jan 14 00:59:17.682808 containerd[1939]: time="2026-01-14T00:59:17.682694595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:59:17.682808 containerd[1939]: time="2026-01-14T00:59:17.682784111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:17.683128 kubelet[3293]: E0114 00:59:17.683088 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:59:17.683283 kubelet[3293]: E0114 00:59:17.683137 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:59:17.683656 kubelet[3293]: E0114 00:59:17.683366 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qw6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xlvwp_calico-system(13972001-c667-49c0-9374-a2bbe47d8026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:17.684331 containerd[1939]: time="2026-01-14T00:59:17.684254006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:59:17.685276 kubelet[3293]: E0114 00:59:17.685245 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 00:59:17.703255 kubelet[3293]: I0114 00:59:17.700512 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w9259" podStartSLOduration=78.695592863 podStartE2EDuration="1m18.695592863s" podCreationTimestamp="2026-01-14 00:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:17.677256049 +0000 UTC m=+84.623610902" watchObservedRunningTime="2026-01-14 00:59:17.695592863 +0000 UTC m=+84.641947716" Jan 14 00:59:17.713000 audit[5266]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:17.713000 audit[5266]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0e2aa6d0 a2=0 a3=7ffe0e2aa6bc items=0 ppid=3541 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:17.717893 kubelet[3293]: I0114 00:59:17.717576 3293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mm224" podStartSLOduration=78.717561124 podStartE2EDuration="1m18.717561124s" podCreationTimestamp="2026-01-14 00:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 00:59:17.716492517 +0000 UTC m=+84.662847369" watchObservedRunningTime="2026-01-14 
00:59:17.717561124 +0000 UTC m=+84.663916005" Jan 14 00:59:17.719000 audit[5266]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:17.719000 audit[5266]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe0e2aa6d0 a2=0 a3=0 items=0 ppid=3541 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:17.739393 systemd-networkd[1548]: calie10e6efd824: Gained IPv6LL Jan 14 00:59:17.755000 audit[5268]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:17.755000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff9495ac20 a2=0 a3=7fff9495ac0c items=0 ppid=3541 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:17.757000 audit[5268]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:17.757000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff9495ac20 a2=0 a3=0 items=0 ppid=3541 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:17.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:17.867382 systemd-networkd[1548]: cali40acdcb5b70: Gained IPv6LL Jan 14 00:59:17.867750 systemd-networkd[1548]: cali90da3e2c685: Gained IPv6LL Jan 14 00:59:17.963395 containerd[1939]: time="2026-01-14T00:59:17.963242662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:17.965344 containerd[1939]: time="2026-01-14T00:59:17.965296749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:59:17.965578 containerd[1939]: time="2026-01-14T00:59:17.965304735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:17.965620 kubelet[3293]: E0114 00:59:17.965534 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:59:17.965620 kubelet[3293]: E0114 00:59:17.965571 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:59:17.966075 kubelet[3293]: E0114 00:59:17.966034 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:17.966254 containerd[1939]: time="2026-01-14T00:59:17.966228902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:59:17.967387 kubelet[3293]: E0114 00:59:17.967327 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 00:59:18.236338 containerd[1939]: time="2026-01-14T00:59:18.236226350Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 14 00:59:18.238453 containerd[1939]: time="2026-01-14T00:59:18.238344236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:59:18.238453 containerd[1939]: time="2026-01-14T00:59:18.238391937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:18.238668 kubelet[3293]: E0114 00:59:18.238625 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:59:18.238948 kubelet[3293]: E0114 00:59:18.238681 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:59:18.238948 kubelet[3293]: E0114 00:59:18.238795 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:18.241110 containerd[1939]: time="2026-01-14T00:59:18.241083196Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:59:18.443881 systemd-networkd[1548]: cali1c9a34be8b4: Gained IPv6LL Jan 14 00:59:18.483904 containerd[1939]: time="2026-01-14T00:59:18.483845645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:18.486041 containerd[1939]: time="2026-01-14T00:59:18.485996886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:59:18.486106 containerd[1939]: time="2026-01-14T00:59:18.486076497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:18.486317 kubelet[3293]: E0114 00:59:18.486279 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:59:18.486447 kubelet[3293]: E0114 00:59:18.486327 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:59:18.486868 kubelet[3293]: E0114 00:59:18.486457 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:18.488224 kubelet[3293]: E0114 00:59:18.488166 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:59:18.571343 systemd-networkd[1548]: cali45356dc7017: Gained IPv6LL Jan 14 00:59:18.627208 kubelet[3293]: E0114 00:59:18.627150 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:59:18.629570 kubelet[3293]: E0114 00:59:18.629281 3293 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 00:59:18.630003 kubelet[3293]: E0114 00:59:18.629961 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 00:59:18.630144 kubelet[3293]: E0114 00:59:18.629920 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:59:18.696000 audit[5276]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:18.698373 kernel: kauditd_printk_skb: 391 callbacks suppressed Jan 14 00:59:18.698471 kernel: audit: type=1325 audit(1768352358.696:728): table=filter:132 family=2 entries=20 op=nft_register_rule pid=5276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:18.696000 audit[5276]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff6e2f02b0 a2=0 a3=7fff6e2f029c items=0 ppid=3541 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:18.703008 kernel: audit: type=1300 audit(1768352358.696:728): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff6e2f02b0 a2=0 a3=7fff6e2f029c items=0 ppid=3541 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:18.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:18.708623 kernel: audit: type=1327 audit(1768352358.696:728): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:18.703000 audit[5276]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:18.713091 kernel: audit: type=1325 audit(1768352358.703:729): table=nat:133 family=2 entries=14 op=nft_register_rule pid=5276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:18.713432 kernel: audit: type=1300 audit(1768352358.703:729): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff6e2f02b0 a2=0 a3=0 items=0 ppid=3541 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:18.703000 audit[5276]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff6e2f02b0 a2=0 a3=0 items=0 ppid=3541 pid=5276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:18.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:18.721249 kernel: audit: type=1327 audit(1768352358.703:729): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:20.912660 ntpd[1913]: Listen normally on 6 vxlan.calico 192.168.64.192:123 Jan 14 00:59:20.912719 ntpd[1913]: Listen normally on 7 vxlan.calico [fe80::641d:45ff:febc:3514%4]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 6 vxlan.calico 192.168.64.192:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 7 vxlan.calico [fe80::641d:45ff:febc:3514%4]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 8 cali01a5d1589e8 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 9 calie10e6efd824 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 10 cali90da3e2c685 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 11 cali40acdcb5b70 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 12 cali45356dc7017 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 14 00:59:20.914657 ntpd[1913]: 14 Jan 00:59:20 ntpd[1913]: Listen normally on 13 cali1c9a34be8b4 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 14 00:59:20.912741 ntpd[1913]: Listen normally on 8 cali01a5d1589e8 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 14 00:59:20.912760 ntpd[1913]: Listen normally on 9 calie10e6efd824 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 14 00:59:20.912778 ntpd[1913]: Listen normally on 10 cali90da3e2c685 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 14 00:59:20.912862 ntpd[1913]: Listen normally on 11 cali40acdcb5b70 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 14 00:59:20.912884 ntpd[1913]: Listen 
normally on 12 cali45356dc7017 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 14 00:59:20.912904 ntpd[1913]: Listen normally on 13 cali1c9a34be8b4 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 14 00:59:23.303296 containerd[1939]: time="2026-01-14T00:59:23.303056838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4df85555-29mjx,Uid:f05abc55-0515-49ca-aacb-ebde63d756a4,Namespace:calico-system,Attempt:0,}" Jan 14 00:59:23.503595 systemd-networkd[1548]: cali89ae691a471: Link UP Jan 14 00:59:23.504254 systemd-networkd[1548]: cali89ae691a471: Gained carrier Jan 14 00:59:23.508261 (udev-worker)[5311]: Network interface NamePolicy= disabled on kernel command line. Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.431 [INFO][5293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0 calico-kube-controllers-6b4df85555- calico-system f05abc55-0515-49ca-aacb-ebde63d756a4 857 0 2026-01-14 00:58:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b4df85555 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-12 calico-kube-controllers-6b4df85555-29mjx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali89ae691a471 [] [] }} ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.433 [INFO][5293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.461 [INFO][5305] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" HandleID="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.461 [INFO][5305] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" HandleID="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-12", "pod":"calico-kube-controllers-6b4df85555-29mjx", "timestamp":"2026-01-14 00:59:23.461499927 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.461 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.461 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.461 [INFO][5305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.469 [INFO][5305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.475 [INFO][5305] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.478 [INFO][5305] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.480 [INFO][5305] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.482 [INFO][5305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.482 [INFO][5305] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.483 [INFO][5305] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.488 [INFO][5305] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.496 [INFO][5305] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.199/26] block=192.168.64.192/26 handle="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.496 [INFO][5305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.199/26] handle="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" host="ip-172-31-19-12" Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.496 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:59:23.521159 containerd[1939]: 2026-01-14 00:59:23.496 [INFO][5305] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.199/26] IPv6=[] ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" HandleID="k8s-pod-network.a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Workload="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.521799 containerd[1939]: 2026-01-14 00:59:23.499 [INFO][5293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0", GenerateName:"calico-kube-controllers-6b4df85555-", Namespace:"calico-system", SelfLink:"", UID:"f05abc55-0515-49ca-aacb-ebde63d756a4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b4df85555", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"calico-kube-controllers-6b4df85555-29mjx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89ae691a471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:23.521799 containerd[1939]: 2026-01-14 00:59:23.500 [INFO][5293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.199/32] ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.521799 containerd[1939]: 2026-01-14 00:59:23.500 [INFO][5293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89ae691a471 ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.521799 containerd[1939]: 2026-01-14 00:59:23.505 [INFO][5293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.521799 containerd[1939]: 
2026-01-14 00:59:23.505 [INFO][5293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0", GenerateName:"calico-kube-controllers-6b4df85555-", Namespace:"calico-system", SelfLink:"", UID:"f05abc55-0515-49ca-aacb-ebde63d756a4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b4df85555", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d", Pod:"calico-kube-controllers-6b4df85555-29mjx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89ae691a471", MAC:"f6:f0:b9:fe:26:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:23.521799 containerd[1939]: 2026-01-14 00:59:23.516 [INFO][5293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" Namespace="calico-system" Pod="calico-kube-controllers-6b4df85555-29mjx" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--kube--controllers--6b4df85555--29mjx-eth0" Jan 14 00:59:23.535000 audit[5320]: NETFILTER_CFG table=filter:134 family=2 entries=52 op=nft_register_chain pid=5320 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:23.540440 kernel: audit: type=1325 audit(1768352363.535:730): table=filter:134 family=2 entries=52 op=nft_register_chain pid=5320 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:23.535000 audit[5320]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7fffa9f6d7b0 a2=0 a3=7fffa9f6d79c items=0 ppid=4638 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.547139 kernel: audit: type=1300 audit(1768352363.535:730): arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7fffa9f6d7b0 a2=0 a3=7fffa9f6d79c items=0 ppid=4638 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.547237 kernel: audit: type=1327 
audit(1768352363.535:730): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:23.535000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:23.568210 containerd[1939]: time="2026-01-14T00:59:23.567129258Z" level=info msg="connecting to shim a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d" address="unix:///run/containerd/s/b24f36e62139dfad8f3bf8c296ff2877c8862b117bd4f62550322f1872de2a9d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:23.604423 systemd[1]: Started cri-containerd-a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d.scope - libcontainer container a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d. Jan 14 00:59:23.616000 audit: BPF prog-id=253 op=LOAD Jan 14 00:59:23.616000 audit: BPF prog-id=254 op=LOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.616000 audit: BPF prog-id=254 op=UNLOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.619322 kernel: audit: type=1334 audit(1768352363.616:731): prog-id=253 op=LOAD Jan 14 00:59:23.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.616000 audit: BPF prog-id=255 op=LOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.616000 audit: BPF prog-id=256 op=LOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.616000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.616000 audit: BPF prog-id=256 op=UNLOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.616000 audit: BPF prog-id=255 op=UNLOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.616000 audit: BPF prog-id=257 op=LOAD Jan 14 00:59:23.616000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5330 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:23.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137636132376432346265386365373832396134363930333439663264 Jan 14 00:59:23.661650 containerd[1939]: time="2026-01-14T00:59:23.661610133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4df85555-29mjx,Uid:f05abc55-0515-49ca-aacb-ebde63d756a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7ca27d24be8ce7829a4690349f2d0b1c03b8ab024b8df952fd6e2c4b0a8c41d\"" Jan 14 00:59:23.666241 containerd[1939]: time="2026-01-14T00:59:23.665425186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:59:23.936819 containerd[1939]: time="2026-01-14T00:59:23.936689222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:23.939406 containerd[1939]: time="2026-01-14T00:59:23.939274160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:59:23.939406 containerd[1939]: time="2026-01-14T00:59:23.939299803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:23.939645 
kubelet[3293]: E0114 00:59:23.939549 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:59:23.939645 kubelet[3293]: E0114 00:59:23.939593 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:59:23.940576 kubelet[3293]: E0114 00:59:23.939716 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:23.941026 kubelet[3293]: E0114 00:59:23.940994 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:59:24.301441 containerd[1939]: time="2026-01-14T00:59:24.301166884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-pr6gg,Uid:7449b447-f9d5-45e2-8001-6763bc56b2d8,Namespace:calico-apiserver,Attempt:0,}" Jan 14 00:59:24.417112 systemd-networkd[1548]: cali01504b0f4f5: Link UP Jan 14 00:59:24.417878 systemd-networkd[1548]: cali01504b0f4f5: Gained carrier Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.345 [INFO][5373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0 calico-apiserver-cdb9b59bb- calico-apiserver 7449b447-f9d5-45e2-8001-6763bc56b2d8 865 0 2026-01-14 00:58:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cdb9b59bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-12 calico-apiserver-cdb9b59bb-pr6gg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01504b0f4f5 [] [] }} ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.346 [INFO][5373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.371 [INFO][5380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" HandleID="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Workload="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.371 [INFO][5380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" HandleID="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Workload="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-12", "pod":"calico-apiserver-cdb9b59bb-pr6gg", "timestamp":"2026-01-14 00:59:24.371772412 +0000 UTC"}, Hostname:"ip-172-31-19-12", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.371 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.372 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.372 [INFO][5380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-12' Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.379 [INFO][5380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.384 [INFO][5380] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.389 [INFO][5380] ipam/ipam.go 511: Trying affinity for 192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.391 [INFO][5380] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.393 [INFO][5380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.192/26 host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.393 [INFO][5380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.192/26 handle="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.395 [INFO][5380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451 Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.401 [INFO][5380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.192/26 handle="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.410 [INFO][5380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.200/26] block=192.168.64.192/26 handle="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.410 [INFO][5380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.200/26] handle="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" host="ip-172-31-19-12" Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.410 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 00:59:24.443926 containerd[1939]: 2026-01-14 00:59:24.410 [INFO][5380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.200/26] IPv6=[] ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" HandleID="k8s-pod-network.bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Workload="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.446967 containerd[1939]: 2026-01-14 00:59:24.412 [INFO][5373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0", GenerateName:"calico-apiserver-cdb9b59bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7449b447-f9d5-45e2-8001-6763bc56b2d8", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cdb9b59bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"", Pod:"calico-apiserver-cdb9b59bb-pr6gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01504b0f4f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:24.446967 containerd[1939]: 2026-01-14 00:59:24.412 [INFO][5373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.200/32] ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.446967 containerd[1939]: 2026-01-14 00:59:24.412 [INFO][5373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01504b0f4f5 ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.446967 containerd[1939]: 2026-01-14 00:59:24.418 [INFO][5373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.446967 containerd[1939]: 2026-01-14 00:59:24.418 [INFO][5373] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0", GenerateName:"calico-apiserver-cdb9b59bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7449b447-f9d5-45e2-8001-6763bc56b2d8", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 0, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cdb9b59bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-12", ContainerID:"bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451", Pod:"calico-apiserver-cdb9b59bb-pr6gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01504b0f4f5", MAC:"2a:31:ef:2a:32:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 00:59:24.446967 containerd[1939]: 2026-01-14 00:59:24.434 [INFO][5373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" Namespace="calico-apiserver" Pod="calico-apiserver-cdb9b59bb-pr6gg" WorkloadEndpoint="ip--172--31--19--12-k8s-calico--apiserver--cdb9b59bb--pr6gg-eth0" Jan 14 00:59:24.462038 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 14 00:59:24.462148 kernel: audit: type=1325 audit(1768352364.456:739): table=filter:135 family=2 entries=63 op=nft_register_chain pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:24.456000 audit[5398]: NETFILTER_CFG table=filter:135 family=2 entries=63 op=nft_register_chain pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 00:59:24.471770 kernel: audit: type=1300 audit(1768352364.456:739): arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7ffe14d18820 a2=0 a3=7ffe14d1880c items=0 ppid=4638 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.472301 kernel: audit: type=1327 audit(1768352364.456:739): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:24.456000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7ffe14d18820 a2=0 a3=7ffe14d1880c items=0 ppid=4638 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.456000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 00:59:24.498065 containerd[1939]: time="2026-01-14T00:59:24.497624943Z" level=info msg="connecting to shim bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451" address="unix:///run/containerd/s/8d61cedc801514c254cb033ff3b6824d7ae6ab2928365169bb52371241ae7a07" namespace=k8s.io protocol=ttrpc version=3 Jan 14 00:59:24.526390 systemd[1]: Started cri-containerd-bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451.scope - libcontainer container bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451. Jan 14 00:59:24.537000 audit: BPF prog-id=258 op=LOAD Jan 14 00:59:24.540436 kernel: audit: type=1334 audit(1768352364.537:740): prog-id=258 op=LOAD Jan 14 00:59:24.540914 kernel: audit: type=1334 audit(1768352364.538:741): prog-id=259 op=LOAD Jan 14 00:59:24.538000 audit: BPF prog-id=259 op=LOAD Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.543984 kernel: audit: type=1300 audit(1768352364.538:741): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.552610 kernel: audit: type=1327 audit(1768352364.538:741): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: BPF prog-id=259 op=UNLOAD Jan 14 00:59:24.562278 kernel: audit: type=1334 audit(1768352364.538:742): prog-id=259 op=UNLOAD Jan 14 00:59:24.562384 kernel: audit: type=1300 audit(1768352364.538:742): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.568428 kernel: audit: type=1327 audit(1768352364.538:742): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: BPF prog-id=260 op=LOAD Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: BPF prog-id=261 op=LOAD Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: BPF prog-id=261 op=UNLOAD Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: BPF prog-id=260 op=UNLOAD Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.538000 audit: BPF prog-id=262 op=LOAD Jan 14 00:59:24.538000 audit[5419]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5407 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:24.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373930373339656437666664353639373736646666646137316537 Jan 14 00:59:24.590237 containerd[1939]: time="2026-01-14T00:59:24.590082822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cdb9b59bb-pr6gg,Uid:7449b447-f9d5-45e2-8001-6763bc56b2d8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bb790739ed7ffd569776dffda71e784ce97e3c50c74b3d0d9fd77558661b0451\"" Jan 14 00:59:24.592784 containerd[1939]: time="2026-01-14T00:59:24.592600802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:59:24.644463 kubelet[3293]: E0114 00:59:24.644421 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:59:24.888337 containerd[1939]: time="2026-01-14T00:59:24.888295511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:24.890466 containerd[1939]: time="2026-01-14T00:59:24.890422742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:59:24.890592 containerd[1939]: time="2026-01-14T00:59:24.890439913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:24.890662 kubelet[3293]: E0114 00:59:24.890637 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:24.890716 kubelet[3293]: E0114 00:59:24.890670 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:24.890824 kubelet[3293]: E0114 00:59:24.890787 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glhgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:24.892165 kubelet[3293]: E0114 00:59:24.892129 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 00:59:25.484024 systemd-networkd[1548]: cali01504b0f4f5: Gained IPv6LL Jan 14 00:59:25.484317 systemd-networkd[1548]: cali89ae691a471: Gained IPv6LL Jan 14 00:59:25.661469 kubelet[3293]: E0114 00:59:25.661146 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:59:25.661469 kubelet[3293]: E0114 00:59:25.661224 3293 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 00:59:25.705000 audit[5445]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5445 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:25.705000 audit[5445]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeb9d7a5b0 a2=0 a3=7ffeb9d7a59c items=0 ppid=3541 pid=5445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:25.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:25.712000 audit[5445]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=5445 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:25.712000 audit[5445]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeb9d7a5b0 a2=0 a3=0 items=0 ppid=3541 pid=5445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:25.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:27.912604 ntpd[1913]: Listen normally on 14 cali89ae691a471 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 14 00:59:27.913031 ntpd[1913]: 14 Jan 00:59:27 ntpd[1913]: Listen normally on 14 cali89ae691a471 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 14 00:59:27.913031 ntpd[1913]: 14 Jan 00:59:27 ntpd[1913]: Listen normally on 15 cali01504b0f4f5 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 14 00:59:27.912656 ntpd[1913]: Listen normally on 15 cali01504b0f4f5 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 14 00:59:29.667000 audit[5449]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.670291 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 14 00:59:29.670377 kernel: audit: type=1325 audit(1768352369.667:750): table=filter:138 family=2 entries=17 op=nft_register_rule pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.667000 audit[5449]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec76bcf80 a2=0 a3=7ffec76bcf6c items=0 ppid=3541 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:29.680202 kernel: audit: type=1300 audit(1768352369.667:750): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec76bcf80 a2=0 a3=7ffec76bcf6c items=0 ppid=3541 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
00:59:29.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:29.686943 kernel: audit: type=1327 audit(1768352369.667:750): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:29.675000 audit[5449]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.693516 kernel: audit: type=1325 audit(1768352369.675:751): table=nat:139 family=2 entries=35 op=nft_register_chain pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.675000 audit[5449]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffec76bcf80 a2=0 a3=7ffec76bcf6c items=0 ppid=3541 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:29.705176 kernel: audit: type=1300 audit(1768352369.675:751): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffec76bcf80 a2=0 a3=7ffec76bcf6c items=0 ppid=3541 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:29.705451 kernel: audit: type=1327 audit(1768352369.675:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:29.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:29.698000 audit[5451]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.698000 audit[5451]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd364cfcf0 a2=0 a3=7ffd364cfcdc items=0 ppid=3541 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:29.709156 kernel: audit: type=1325 audit(1768352369.698:752): table=filter:140 family=2 entries=14 op=nft_register_rule pid=5451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.709214 kernel: audit: type=1300 audit(1768352369.698:752): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd364cfcf0 a2=0 a3=7ffd364cfcdc items=0 ppid=3541 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:29.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:29.716050 kernel: audit: type=1327 audit(1768352369.698:752): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:29.757000 audit[5451]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.763217 kernel: audit: type=1325 audit(1768352369.757:753): table=nat:141 family=2 
entries=56 op=nft_register_chain pid=5451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 00:59:29.757000 audit[5451]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd364cfcf0 a2=0 a3=7ffd364cfcdc items=0 ppid=3541 pid=5451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:29.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 00:59:30.682926 systemd[1]: Started sshd@7-172.31.19.12:22-68.220.241.50:41748.service - OpenSSH per-connection server daemon (68.220.241.50:41748). Jan 14 00:59:30.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.12:22-68.220.241.50:41748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:31.174000 audit[5460]: USER_ACCT pid=5460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:31.176407 sshd[5460]: Accepted publickey for core from 68.220.241.50 port 41748 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:31.176000 audit[5460]: CRED_ACQ pid=5460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:31.176000 audit[5460]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e54fab0 a2=3 a3=0 items=0 ppid=1 pid=5460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:31.176000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:31.179063 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:31.187253 systemd-logind[1922]: New session 9 of user core. Jan 14 00:59:31.191408 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 00:59:31.194000 audit[5460]: USER_START pid=5460 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:31.196000 audit[5464]: CRED_ACQ pid=5464 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:31.303804 containerd[1939]: time="2026-01-14T00:59:31.303724679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:59:31.544833 containerd[1939]: time="2026-01-14T00:59:31.544715869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:31.545990 containerd[1939]: time="2026-01-14T00:59:31.545947178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:59:31.546092 containerd[1939]: time="2026-01-14T00:59:31.546021583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:31.546272 kubelet[3293]: E0114 00:59:31.546237 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:59:31.548120 kubelet[3293]: E0114 00:59:31.546292 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:59:31.548120 kubelet[3293]: E0114 00:59:31.546754 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:31.548676 containerd[1939]: time="2026-01-14T00:59:31.546600836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:59:31.836142 containerd[1939]: time="2026-01-14T00:59:31.835973091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:31.837002 containerd[1939]: time="2026-01-14T00:59:31.836955259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:59:31.837081 containerd[1939]: time="2026-01-14T00:59:31.837032196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:31.837876 kubelet[3293]: E0114 00:59:31.837229 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:31.837876 kubelet[3293]: E0114 00:59:31.837270 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:31.838597 kubelet[3293]: E0114 00:59:31.837644 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hskc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-cpdlw_calico-apiserver(5e14ba26-eb09-4a70-a4a6-9ad8bd987906): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:31.838845 containerd[1939]: time="2026-01-14T00:59:31.838358323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:59:31.839581 kubelet[3293]: E0114 00:59:31.839533 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:59:32.037302 sshd[5464]: Connection closed by 68.220.241.50 port 41748 Jan 14 00:59:32.038564 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:32.041000 audit[5460]: USER_END pid=5460 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:32.041000 audit[5460]: CRED_DISP pid=5460 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:32.045694 systemd-logind[1922]: Session 9 logged out. Waiting for processes to exit. Jan 14 00:59:32.045835 systemd[1]: sshd@7-172.31.19.12:22-68.220.241.50:41748.service: Deactivated successfully. Jan 14 00:59:32.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.12:22-68.220.241.50:41748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:32.048752 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 00:59:32.050881 systemd-logind[1922]: Removed session 9. Jan 14 00:59:32.103252 containerd[1939]: time="2026-01-14T00:59:32.103116769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:32.104883 containerd[1939]: time="2026-01-14T00:59:32.104722070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:59:32.104883 containerd[1939]: time="2026-01-14T00:59:32.104778362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:32.105119 kubelet[3293]: E0114 00:59:32.104996 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:59:32.105119 kubelet[3293]: E0114 00:59:32.105036 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:59:32.105308 kubelet[3293]: E0114 00:59:32.105154 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:32.106748 kubelet[3293]: E0114 00:59:32.106710 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:59:32.301958 containerd[1939]: time="2026-01-14T00:59:32.301908049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:59:32.546235 containerd[1939]: time="2026-01-14T00:59:32.545948108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:32.548180 containerd[1939]: time="2026-01-14T00:59:32.548041760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:59:32.548180 containerd[1939]: time="2026-01-14T00:59:32.548067317Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:32.548498 kubelet[3293]: E0114 00:59:32.548457 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:59:32.549484 kubelet[3293]: E0114 00:59:32.548510 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:59:32.549484 kubelet[3293]: E0114 00:59:32.548661 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71e9debd4c934ffd9f54d469c8ec9440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:32.551577 containerd[1939]: time="2026-01-14T00:59:32.551420862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:59:32.831880 containerd[1939]: time="2026-01-14T00:59:32.831833329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:32.832748 containerd[1939]: time="2026-01-14T00:59:32.832712170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:59:32.832912 containerd[1939]: time="2026-01-14T00:59:32.832790099Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:32.832984 kubelet[3293]: E0114 00:59:32.832951 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:59:32.833053 kubelet[3293]: E0114 00:59:32.832995 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:59:32.833198 kubelet[3293]: E0114 00:59:32.833119 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:32.834460 kubelet[3293]: E0114 00:59:32.834398 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 00:59:33.303225 containerd[1939]: time="2026-01-14T00:59:33.302152213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 00:59:33.586990 containerd[1939]: time="2026-01-14T00:59:33.586931324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:33.588407 containerd[1939]: time="2026-01-14T00:59:33.588296379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 00:59:33.588407 containerd[1939]: time="2026-01-14T00:59:33.588340035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:33.588621 kubelet[3293]: E0114 00:59:33.588578 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:59:33.589099 kubelet[3293]: E0114 00:59:33.588632 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 00:59:33.589099 kubelet[3293]: E0114 00:59:33.588869 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qw6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xlvwp_calico-system(13972001-c667-49c0-9374-a2bbe47d8026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:33.590356 kubelet[3293]: E0114 00:59:33.590326 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 00:59:37.131106 systemd[1]: Started sshd@8-172.31.19.12:22-68.220.241.50:60302.service - OpenSSH per-connection server daemon (68.220.241.50:60302). Jan 14 00:59:37.136090 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 00:59:37.136134 kernel: audit: type=1130 audit(1768352377.130:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.12:22-68.220.241.50:60302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:37.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.12:22-68.220.241.50:60302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:59:37.574000 audit[5486]: USER_ACCT pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.577854 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:37.580630 sshd[5486]: Accepted publickey for core from 68.220.241.50 port 60302 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:37.574000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.586090 kernel: audit: type=1101 audit(1768352377.574:764): pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.586170 kernel: audit: type=1103 audit(1768352377.574:765): pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.588959 systemd-logind[1922]: New session 10 of user core. Jan 14 00:59:37.591806 kernel: audit: type=1006 audit(1768352377.576:766): pid=5486 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 00:59:37.591848 kernel: audit: type=1300 audit(1768352377.576:766): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf0116330 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:37.576000 audit[5486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf0116330 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:37.576000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:37.597602 kernel: audit: type=1327 audit(1768352377.576:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:37.599463 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 00:59:37.602000 audit[5486]: USER_START pid=5486 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.609000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.614989 kernel: audit: type=1105 audit(1768352377.602:767): pid=5486 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.615084 kernel: audit: type=1103 audit(1768352377.609:768): pid=5490 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.875495 sshd[5490]: Connection closed by 68.220.241.50 port 60302 Jan 14 00:59:37.875984 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:37.885266 kernel: audit: type=1106 audit(1768352377.877:769): pid=5486 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.877000 audit[5486]: USER_END pid=5486 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.882394 systemd[1]: sshd@8-172.31.19.12:22-68.220.241.50:60302.service: Deactivated successfully. Jan 14 00:59:37.884742 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 00:59:37.877000 audit[5486]: CRED_DISP pid=5486 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.12:22-68.220.241.50:60302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:37.891316 kernel: audit: type=1104 audit(1768352377.877:770): pid=5486 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:37.891126 systemd-logind[1922]: Session 10 logged out. Waiting for processes to exit. Jan 14 00:59:37.892393 systemd-logind[1922]: Removed session 10. 
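The 404s above are containerd's manifest resolve step failing against ghcr.io for ghcr.io/flatcar/calico/goldmane:v3.30.4. A minimal sketch of reproducing that check off the node, assuming the repository is public and ghcr.io's usual anonymous token flow; the helper name tag_exists is invented for illustration:

    import json
    import urllib.error
    import urllib.request

    def tag_exists(registry: str, repository: str, tag: str) -> bool:
        # Anonymous pull token; assumed to work because the repository is public.
        token_url = f"https://{registry}/token?scope=repository:{repository}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # Manifest lookup; a 404 here is what containerd reports above as
        # "failed to resolve image ... not found".
        req = urllib.request.Request(
            f"https://{registry}/v2/{repository}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(tag_exists("ghcr.io", "flatcar/calico/goldmane", "v3.30.4"))

Run against the tags in these entries it would be expected to print False, matching the kubelet errors.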
Jan 14 00:59:38.302511 containerd[1939]: time="2026-01-14T00:59:38.302236580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 00:59:38.770969 containerd[1939]: time="2026-01-14T00:59:38.770925468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:38.771952 containerd[1939]: time="2026-01-14T00:59:38.771915448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 00:59:38.772092 containerd[1939]: time="2026-01-14T00:59:38.771986535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:38.772157 kubelet[3293]: E0114 00:59:38.772128 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:59:38.772463 kubelet[3293]: E0114 00:59:38.772172 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 00:59:38.772897 kubelet[3293]: E0114 00:59:38.772476 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:38.773154 containerd[1939]: time="2026-01-14T00:59:38.772727931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:59:38.773905 kubelet[3293]: E0114 00:59:38.773862 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:59:39.091651 containerd[1939]: time="2026-01-14T00:59:39.091585764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:39.092666 containerd[1939]: time="2026-01-14T00:59:39.092625253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:59:39.092763 containerd[1939]: time="2026-01-14T00:59:39.092698625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:39.092938 kubelet[3293]: E0114 00:59:39.092901 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:39.093016 kubelet[3293]: E0114 00:59:39.092947 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:39.093102 kubelet[3293]: E0114 00:59:39.093068 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glhgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:39.094634 kubelet[3293]: E0114 00:59:39.094545 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 00:59:42.967762 systemd[1]: Started sshd@9-172.31.19.12:22-68.220.241.50:40274.service - OpenSSH per-connection server daemon (68.220.241.50:40274). Jan 14 00:59:42.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.12:22-68.220.241.50:40274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:42.969633 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:59:42.969699 kernel: audit: type=1130 audit(1768352382.966:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.12:22-68.220.241.50:40274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:59:43.322087 kubelet[3293]: E0114 00:59:43.322048 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:59:43.427000 audit[5529]: USER_ACCT pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.430954 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:43.434638 sshd[5529]: Accepted publickey for core from 68.220.241.50 port 40274 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:43.435201 kernel: audit: type=1101 audit(1768352383.427:773): pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.427000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.440204 kernel: audit: type=1103 audit(1768352383.427:774): pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.443267 kernel: audit: type=1006 audit(1768352383.427:775): pid=5529 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 00:59:43.427000 audit[5529]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde1b4fd10 a2=3 a3=0 items=0 ppid=1 pid=5529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:43.445994 systemd-logind[1922]: New session 11 of user core. Jan 14 00:59:43.449289 kernel: audit: type=1300 audit(1768352383.427:775): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde1b4fd10 a2=3 a3=0 items=0 ppid=1 pid=5529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:43.449319 kernel: audit: type=1327 audit(1768352383.427:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:43.427000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:43.452393 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 00:59:43.456000 audit[5529]: USER_START pid=5529 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.468499 kernel: audit: type=1105 audit(1768352383.456:776): pid=5529 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.468583 kernel: audit: type=1103 audit(1768352383.458:777): pid=5533 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.458000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.739373 sshd[5533]: Connection closed by 68.220.241.50 port 40274 Jan 14 00:59:43.742363 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:43.744000 audit[5529]: USER_END pid=5529 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.752236 kernel: audit: type=1106 audit(1768352383.744:778): pid=5529 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.750647 systemd[1]: sshd@9-172.31.19.12:22-68.220.241.50:40274.service: Deactivated successfully. Jan 14 00:59:43.754115 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 00:59:43.744000 audit[5529]: CRED_DISP pid=5529 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.756428 systemd-logind[1922]: Session 11 logged out. Waiting for processes to exit. Jan 14 00:59:43.758539 systemd-logind[1922]: Removed session 11. Jan 14 00:59:43.759351 kernel: audit: type=1104 audit(1768352383.744:779): pid=5529 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:43.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.12:22-68.220.241.50:40274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:59:43.827505 systemd[1]: Started sshd@10-172.31.19.12:22-68.220.241.50:40286.service - OpenSSH per-connection server daemon (68.220.241.50:40286). Jan 14 00:59:43.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.12:22-68.220.241.50:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:44.271000 audit[5546]: USER_ACCT pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:44.273164 sshd[5546]: Accepted publickey for core from 68.220.241.50 port 40286 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:44.272000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:44.272000 audit[5546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff66dcb500 a2=3 a3=0 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:44.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:44.274819 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:44.280248 systemd-logind[1922]: New session 12 of user core. Jan 14 00:59:44.285380 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 00:59:44.287000 audit[5546]: USER_START pid=5546 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:44.289000 audit[5550]: CRED_ACQ pid=5550 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:44.303520 kubelet[3293]: E0114 00:59:44.303471 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 00:59:44.640371 sshd[5550]: Connection closed by 68.220.241.50 port 40286 Jan 14 00:59:44.642358 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:44.642000 audit[5546]: USER_END pid=5546 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:44.643000 audit[5546]: CRED_DISP pid=5546 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:44.647430 systemd[1]: sshd@10-172.31.19.12:22-68.220.241.50:40286.service: Deactivated successfully. Jan 14 00:59:44.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.12:22-68.220.241.50:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:44.651925 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 00:59:44.654841 systemd-logind[1922]: Session 12 logged out. Waiting for processes to exit. Jan 14 00:59:44.657694 systemd-logind[1922]: Removed session 12. Jan 14 00:59:44.736602 systemd[1]: Started sshd@11-172.31.19.12:22-68.220.241.50:40290.service - OpenSSH per-connection server daemon (68.220.241.50:40290). Jan 14 00:59:44.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.12:22-68.220.241.50:40290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:59:45.169000 audit[5561]: USER_ACCT pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:45.171317 sshd[5561]: Accepted publickey for core from 68.220.241.50 port 40290 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:45.171000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:45.171000 audit[5561]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebd210fd0 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:45.171000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:45.173378 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:45.179273 systemd-logind[1922]: New session 13 of user core. Jan 14 00:59:45.185407 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 00:59:45.188000 audit[5561]: USER_START pid=5561 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:45.189000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:45.479082 sshd[5565]: Connection closed by 68.220.241.50 port 40290 Jan 14 00:59:45.478744 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:45.479000 audit[5561]: USER_END pid=5561 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:45.481000 audit[5561]: CRED_DISP pid=5561 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:45.486559 systemd-logind[1922]: Session 13 logged out. Waiting for processes to exit. Jan 14 00:59:45.489575 systemd[1]: sshd@11-172.31.19.12:22-68.220.241.50:40290.service: Deactivated successfully. Jan 14 00:59:45.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.12:22-68.220.241.50:40290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:45.493205 systemd[1]: session-13.scope: Deactivated successfully. 
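Each of the short SSH sessions above is bracketed by a logind "New session N of user core" / "Removed session N" pair. A rough sketch of timing them from journal output, assuming one entry per line in the usual short-precise timestamp form; the durations helper is invented for illustration:

    import re
    import sys
    from datetime import datetime

    STAMP = "%b %d %H:%M:%S.%f"
    NEW = re.compile(r"^(?P<ts>\w{3} +\d+ [\d:.]+) .*New session (?P<sid>\d+) of user")
    GONE = re.compile(r"^(?P<ts>\w{3} +\d+ [\d:.]+) .*Removed session (?P<sid>\d+)\.")

    def durations(lines):
        opened, closed = {}, {}
        for line in lines:
            for rx, table in ((NEW, opened), (GONE, closed)):
                m = rx.match(line)
                if m:
                    table[m.group("sid")] = datetime.strptime(m.group("ts"), STAMP)
        # The timestamp carries no year, so this only holds within one day's log.
        return {sid: (closed[sid] - opened[sid]).total_seconds()
                for sid in opened if sid in closed}

    for sid, secs in sorted(durations(sys.stdin).items(), key=lambda kv: int(kv[0])):
        print(f"session {sid}: {secs:.1f}s")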
Jan 14 00:59:45.496153 systemd-logind[1922]: Removed session 13. Jan 14 00:59:47.303324 kubelet[3293]: E0114 00:59:47.303198 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:59:49.302752 kubelet[3293]: E0114 00:59:49.302708 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 00:59:50.301483 kubelet[3293]: E0114 00:59:50.301440 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 00:59:50.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.12:22-68.220.241.50:40300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:50.567732 systemd[1]: Started sshd@12-172.31.19.12:22-68.220.241.50:40300.service - OpenSSH per-connection server daemon (68.220.241.50:40300). Jan 14 00:59:50.568340 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 00:59:50.568389 kernel: audit: type=1130 audit(1768352390.566:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.12:22-68.220.241.50:40300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:59:51.062000 audit[5583]: USER_ACCT pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.065228 sshd[5583]: Accepted publickey for core from 68.220.241.50 port 40300 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:51.067284 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:51.064000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.071417 kernel: audit: type=1101 audit(1768352391.062:800): pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.071500 kernel: audit: type=1103 audit(1768352391.064:801): pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.065000 audit[5583]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7651ee30 a2=3 a3=0 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:51.080302 kernel: audit: type=1006 audit(1768352391.065:802): pid=5583 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 00:59:51.080385 kernel: audit: type=1300 audit(1768352391.065:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7651ee30 a2=3 a3=0 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:51.086063 kernel: audit: type=1327 audit(1768352391.065:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:51.065000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:51.082785 systemd-logind[1922]: New session 14 of user core. Jan 14 00:59:51.088423 systemd[1]: Started session-14.scope - Session 14 of User core. 
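The distinct image references behind the back-off entries a few lines above can be pulled out of the journal text itself; a small sketch assuming output shaped like the containerd entries in this log (failing_images is a made-up name):

    import re
    import sys

    # Matches entries such as: msg="PullImage \"ghcr.io/...:v3.30.4\" failed"
    PATTERN = re.compile(r'PullImage \\?"(?P<image>[^"\\]+)\\?" failed')

    def failing_images(lines):
        seen = set()
        for line in lines:
            m = PATTERN.search(line)
            if m:
                seen.add(m.group("image"))
        return sorted(seen)

    for image in failing_images(sys.stdin):
        print(image)

Fed the journal for this window it would list each ghcr.io/flatcar/calico/* image that resolves to 404.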
Jan 14 00:59:51.093000 audit[5583]: USER_START pid=5583 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.102132 kernel: audit: type=1105 audit(1768352391.093:803): pid=5583 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.102229 kernel: audit: type=1103 audit(1768352391.100:804): pid=5590 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.100000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.307530 kubelet[3293]: E0114 00:59:51.307160 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 00:59:51.378974 sshd[5590]: Connection closed by 68.220.241.50 port 40300 Jan 14 00:59:51.380168 sshd-session[5583]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:51.380000 audit[5583]: USER_END pid=5583 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.391841 kernel: audit: type=1106 audit(1768352391.380:805): pid=5583 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.391917 kernel: audit: type=1104 audit(1768352391.380:806): pid=5583 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:51.380000 audit[5583]: CRED_DISP pid=5583 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 
00:59:51.387903 systemd[1]: sshd@12-172.31.19.12:22-68.220.241.50:40300.service: Deactivated successfully. Jan 14 00:59:51.390888 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 00:59:51.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.12:22-68.220.241.50:40300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:51.393332 systemd-logind[1922]: Session 14 logged out. Waiting for processes to exit. Jan 14 00:59:51.394169 systemd-logind[1922]: Removed session 14. Jan 14 00:59:56.471472 systemd[1]: Started sshd@13-172.31.19.12:22-68.220.241.50:49502.service - OpenSSH per-connection server daemon (68.220.241.50:49502). Jan 14 00:59:56.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.12:22-68.220.241.50:49502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:56.472700 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 00:59:56.472759 kernel: audit: type=1130 audit(1768352396.470:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.12:22-68.220.241.50:49502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:56.916000 audit[5612]: USER_ACCT pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.920537 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:56.921694 sshd[5612]: Accepted publickey for core from 68.220.241.50 port 49502 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 00:59:56.916000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.925673 kernel: audit: type=1101 audit(1768352396.916:809): pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.925743 kernel: audit: type=1103 audit(1768352396.916:810): pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.931419 kernel: audit: type=1006 audit(1768352396.916:811): pid=5612 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 00:59:56.934885 kernel: audit: type=1300 audit(1768352396.916:811): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca13d4e60 a2=3 a3=0 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:56.916000 audit[5612]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca13d4e60 a2=3 a3=0 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 00:59:56.939827 kernel: audit: type=1327 audit(1768352396.916:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:56.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 00:59:56.939299 systemd-logind[1922]: New session 15 of user core. Jan 14 00:59:56.942455 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 00:59:56.945000 audit[5612]: USER_START pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.955209 kernel: audit: type=1105 audit(1768352396.945:812): pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.955292 kernel: audit: type=1103 audit(1768352396.948:813): pid=5616 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:56.948000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:57.230955 sshd[5616]: Connection closed by 68.220.241.50 port 49502 Jan 14 00:59:57.231776 sshd-session[5612]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:57.233000 audit[5612]: USER_END pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:57.240786 systemd[1]: sshd@13-172.31.19.12:22-68.220.241.50:49502.service: Deactivated successfully. Jan 14 00:59:57.241214 kernel: audit: type=1106 audit(1768352397.233:814): pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:57.242822 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 00:59:57.233000 audit[5612]: CRED_DISP pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:57.244246 systemd-logind[1922]: Session 15 logged out. 
Waiting for processes to exit. Jan 14 00:59:57.247366 systemd-logind[1922]: Removed session 15. Jan 14 00:59:57.248279 kernel: audit: type=1104 audit(1768352397.233:815): pid=5612 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 00:59:57.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.12:22-68.220.241.50:49502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:59:58.302129 containerd[1939]: time="2026-01-14T00:59:58.301820720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 00:59:58.763937 containerd[1939]: time="2026-01-14T00:59:58.763895055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:58.766161 containerd[1939]: time="2026-01-14T00:59:58.766125833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 00:59:58.766263 containerd[1939]: time="2026-01-14T00:59:58.766208925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:58.766379 kubelet[3293]: E0114 00:59:58.766338 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:58.766662 kubelet[3293]: E0114 00:59:58.766386 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 00:59:58.766662 kubelet[3293]: E0114 00:59:58.766570 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hskc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-cpdlw_calico-apiserver(5e14ba26-eb09-4a70-a4a6-9ad8bd987906): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:58.767082 containerd[1939]: time="2026-01-14T00:59:58.767056934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 00:59:58.768449 kubelet[3293]: E0114 00:59:58.768412 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 00:59:59.067375 containerd[1939]: time="2026-01-14T00:59:59.067335828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:59.069443 containerd[1939]: time="2026-01-14T00:59:59.069400566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 00:59:59.069512 containerd[1939]: time="2026-01-14T00:59:59.069474753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:59.069638 kubelet[3293]: E0114 00:59:59.069602 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:59:59.069703 kubelet[3293]: E0114 00:59:59.069648 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 00:59:59.069797 kubelet[3293]: E0114 00:59:59.069762 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:59.072220 containerd[1939]: time="2026-01-14T00:59:59.072070496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 00:59:59.340939 containerd[1939]: time="2026-01-14T00:59:59.340776053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:59.342923 containerd[1939]: time="2026-01-14T00:59:59.342871997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 00:59:59.343108 containerd[1939]: time="2026-01-14T00:59:59.342893031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:59.343244 kubelet[3293]: E0114 00:59:59.343178 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:59:59.343346 kubelet[3293]: E0114 00:59:59.343255 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 00:59:59.343546 kubelet[3293]: E0114 00:59:59.343505 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:59.343976 containerd[1939]: time="2026-01-14T00:59:59.343947317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 00:59:59.345153 kubelet[3293]: E0114 00:59:59.345096 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 00:59:59.634155 containerd[1939]: time="2026-01-14T00:59:59.634032582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
00:59:59.636574 containerd[1939]: time="2026-01-14T00:59:59.636506297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 00:59:59.636574 containerd[1939]: time="2026-01-14T00:59:59.636543508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:59.636765 kubelet[3293]: E0114 00:59:59.636731 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:59:59.636852 kubelet[3293]: E0114 00:59:59.636774 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 00:59:59.636985 kubelet[3293]: E0114 00:59:59.636944 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71e9debd4c934ffd9f54d469c8ec9440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:59.639434 containerd[1939]: time="2026-01-14T00:59:59.639403250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 00:59:59.916037 containerd[1939]: time="2026-01-14T00:59:59.915905371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 00:59:59.917974 containerd[1939]: time="2026-01-14T00:59:59.917927820Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 00:59:59.918095 containerd[1939]: time="2026-01-14T00:59:59.918028003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 00:59:59.918267 kubelet[3293]: E0114 00:59:59.918233 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:59:59.918556 kubelet[3293]: E0114 00:59:59.918277 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 00:59:59.918556 kubelet[3293]: E0114 00:59:59.918409 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 00:59:59.919967 kubelet[3293]: E0114 00:59:59.919929 3293 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 01:00:01.304424 containerd[1939]: time="2026-01-14T01:00:01.303832129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:00:02.322030 systemd[1]: Started sshd@14-172.31.19.12:22-68.220.241.50:49504.service - OpenSSH per-connection server daemon (68.220.241.50:49504). Jan 14 01:00:02.324296 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:02.324389 kernel: audit: type=1130 audit(1768352402.321:817): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.12:22-68.220.241.50:49504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.12:22-68.220.241.50:49504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.367385 containerd[1939]: time="2026-01-14T01:00:02.366613324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:02.369006 containerd[1939]: time="2026-01-14T01:00:02.368918876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:00:02.369333 containerd[1939]: time="2026-01-14T01:00:02.368967465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:02.370887 kubelet[3293]: E0114 01:00:02.370389 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:00:02.370887 kubelet[3293]: E0114 01:00:02.370470 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:00:02.371327 containerd[1939]: time="2026-01-14T01:00:02.371205831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:00:02.371516 kubelet[3293]: E0114 01:00:02.370780 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qw6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xlvwp_calico-system(13972001-c667-49c0-9374-a2bbe47d8026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:02.373412 kubelet[3293]: E0114 01:00:02.373268 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 01:00:02.786000 audit[5628]: USER_ACCT pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.787903 sshd[5628]: Accepted publickey for core from 68.220.241.50 port 49504 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:02.790089 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:02.788000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.794416 kernel: audit: type=1101 audit(1768352402.786:818): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.794471 kernel: audit: type=1103 audit(1768352402.788:819): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.797496 systemd-logind[1922]: New session 16 of user core. Jan 14 01:00:02.798732 kernel: audit: type=1006 audit(1768352402.788:820): pid=5628 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:00:02.788000 audit[5628]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0c5db7f0 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:02.802065 kernel: audit: type=1300 audit(1768352402.788:820): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0c5db7f0 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:02.788000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:02.806639 kernel: audit: type=1327 audit(1768352402.788:820): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:02.807396 systemd[1]: Started session-16.scope - Session 16 of User core. 
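The containerd entries above report plain 404s from ghcr.io for every Calico v3.30.4 tag, so the failure sits on the registry side rather than on the node. A minimal sketch of the same check done directly against the OCI distribution API follows; it assumes the repositories are public and allow anonymous token issuance on ghcr.io, and the repository and tag names are copied from the failing PullImage entries above.

    # Sketch: ask ghcr.io whether a tag exists, reproducing the 404 containerd reports.
    # Assumes anonymous pull access to a public repository; repo/tag come from the log above.
    import json
    import urllib.request
    import urllib.error

    REPO = "flatcar/calico/apiserver"   # one of the failing images above
    TAG = "v3.30.4"

    def tag_exists(repo: str, tag: str) -> bool:
        # 1. obtain an anonymous pull token for the repository
        token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # 2. probe the manifest endpoint; a 404 here is exactly what containerd logs
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json",
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(tag_exists(REPO, TAG))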
Jan 14 01:00:02.809000 audit[5628]: USER_START pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.815000 audit[5632]: CRED_ACQ pid=5632 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.817604 kernel: audit: type=1105 audit(1768352402.809:821): pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:02.817668 kernel: audit: type=1103 audit(1768352402.815:822): pid=5632 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:03.109068 sshd[5632]: Connection closed by 68.220.241.50 port 49504 Jan 14 01:00:03.110356 sshd-session[5628]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:03.110000 audit[5628]: USER_END pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:03.123022 kernel: audit: type=1106 audit(1768352403.110:823): pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:03.123066 kernel: audit: type=1104 audit(1768352403.110:824): pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:03.110000 audit[5628]: CRED_DISP pid=5628 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:03.114035 systemd[1]: sshd@14-172.31.19.12:22-68.220.241.50:49504.service: Deactivated successfully. Jan 14 01:00:03.116444 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:00:03.118805 systemd-logind[1922]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:00:03.119819 systemd-logind[1922]: Removed session 16. Jan 14 01:00:03.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.12:22-68.220.241.50:49504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:04.146503 containerd[1939]: time="2026-01-14T01:00:04.146323769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:04.148670 containerd[1939]: time="2026-01-14T01:00:04.148565663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:00:04.148670 containerd[1939]: time="2026-01-14T01:00:04.148641310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:04.148973 kubelet[3293]: E0114 01:00:04.148924 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:00:04.149378 kubelet[3293]: E0114 01:00:04.148984 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:00:04.149378 kubelet[3293]: E0114 01:00:04.149177 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:04.150843 kubelet[3293]: E0114 01:00:04.150798 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 01:00:04.302496 containerd[1939]: time="2026-01-14T01:00:04.301990695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:00:05.107323 containerd[1939]: time="2026-01-14T01:00:05.107255333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:05.109483 containerd[1939]: time="2026-01-14T01:00:05.109409480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:00:05.109483 containerd[1939]: time="2026-01-14T01:00:05.109438096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:05.109640 kubelet[3293]: E0114 01:00:05.109602 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:00:05.109686 kubelet[3293]: E0114 01:00:05.109643 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:00:05.109817 kubelet[3293]: E0114 01:00:05.109772 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glhgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:05.111295 kubelet[3293]: E0114 01:00:05.111253 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 01:00:08.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.12:22-68.220.241.50:37412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:08.204359 systemd[1]: Started sshd@15-172.31.19.12:22-68.220.241.50:37412.service - OpenSSH per-connection server daemon (68.220.241.50:37412). Jan 14 01:00:08.206036 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:08.206219 kernel: audit: type=1130 audit(1768352408.203:826): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.12:22-68.220.241.50:37412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:08.644000 audit[5648]: USER_ACCT pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.648116 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:08.649677 sshd[5648]: Accepted publickey for core from 68.220.241.50 port 37412 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:08.645000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.653358 kernel: audit: type=1101 audit(1768352408.644:827): pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.653448 kernel: audit: type=1103 audit(1768352408.645:828): pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.657973 kernel: audit: type=1006 audit(1768352408.645:829): pid=5648 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:00:08.659981 systemd-logind[1922]: New session 17 of user core. Jan 14 01:00:08.645000 audit[5648]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5a31f10 a2=3 a3=0 items=0 ppid=1 pid=5648 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:08.662153 kernel: audit: type=1300 audit(1768352408.645:829): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5a31f10 a2=3 a3=0 items=0 ppid=1 pid=5648 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:08.645000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:08.666682 kernel: audit: type=1327 audit(1768352408.645:829): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:08.667422 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:00:08.670000 audit[5648]: USER_START pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.678263 kernel: audit: type=1105 audit(1768352408.670:830): pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.677000 audit[5652]: CRED_ACQ pid=5652 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.684210 kernel: audit: type=1103 audit(1768352408.677:831): pid=5652 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.941593 sshd[5652]: Connection closed by 68.220.241.50 port 37412 Jan 14 01:00:08.942299 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:08.942000 audit[5648]: USER_END pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.950216 kernel: audit: type=1106 audit(1768352408.942:832): pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.950450 systemd[1]: sshd@15-172.31.19.12:22-68.220.241.50:37412.service: Deactivated successfully. Jan 14 01:00:08.952325 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:00:08.942000 audit[5648]: CRED_DISP pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:08.953749 systemd-logind[1922]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:00:08.957034 systemd-logind[1922]: Removed session 17. Jan 14 01:00:08.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.12:22-68.220.241.50:37412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:08.958210 kernel: audit: type=1104 audit(1768352408.942:833): pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:09.029682 systemd[1]: Started sshd@16-172.31.19.12:22-68.220.241.50:37420.service - OpenSSH per-connection server daemon (68.220.241.50:37420). Jan 14 01:00:09.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.12:22-68.220.241.50:37420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:09.303996 kubelet[3293]: E0114 01:00:09.303453 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 01:00:09.453000 audit[5664]: USER_ACCT pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:09.454467 sshd[5664]: Accepted publickey for core from 68.220.241.50 port 37420 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:09.454000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:09.454000 audit[5664]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7c7ae2c0 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:09.454000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:09.456511 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:09.461251 systemd-logind[1922]: New session 18 of user core. Jan 14 01:00:09.466375 systemd[1]: Started session-18.scope - Session 18 of User core. 
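Once the first pull attempts fail with ErrImagePull, the kubelet entries above switch to ImagePullBackOff and retry on a widening interval instead of hammering the registry on every sync. The sketch below illustrates that pattern generically; the 10-second initial delay and 5-minute cap are assumptions for illustration only, not values read from this log.

    # Illustrative exponential backoff like the one behind the ImagePullBackOff entries above.
    import time

    def pull_with_backoff(pull, initial=10.0, cap=300.0, factor=2.0, attempts=6):
        delay = initial
        for attempt in range(1, attempts + 1):
            try:
                return pull()
            except RuntimeError as err:            # e.g. the "not found" error above
                print(f"attempt {attempt} failed: {err}; backing off {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * factor, cap)   # widen the interval up to the cap
        raise RuntimeError("image still not pullable after backoff")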
Jan 14 01:00:09.468000 audit[5664]: USER_START pid=5664 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:09.470000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:10.116322 sshd[5668]: Connection closed by 68.220.241.50 port 37420 Jan 14 01:00:10.117374 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:10.118000 audit[5664]: USER_END pid=5664 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:10.118000 audit[5664]: CRED_DISP pid=5664 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:10.128340 systemd[1]: sshd@16-172.31.19.12:22-68.220.241.50:37420.service: Deactivated successfully. Jan 14 01:00:10.128493 systemd-logind[1922]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:00:10.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.12:22-68.220.241.50:37420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:10.130769 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:00:10.133595 systemd-logind[1922]: Removed session 18. Jan 14 01:00:10.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.12:22-68.220.241.50:37422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:10.209312 systemd[1]: Started sshd@17-172.31.19.12:22-68.220.241.50:37422.service - OpenSSH per-connection server daemon (68.220.241.50:37422). 
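The PAM/audit records interleaved with the kubelet errors (USER_ACCT, CRED_ACQ, USER_START, USER_END, SERVICE_START/STOP) all follow the same key=value layout. A small parsing sketch, using an abbreviated sample based on the session-18 records above:

    # Sketch: pull the record type and key=value fields out of one audit line.
    # The sample is abbreviated from the log above; field names are as they appear there.
    import re

    sample = ("audit[5664]: USER_START pid=5664 uid=0 auid=500 ses=18 "
              "subj=system_u:system_r:kernel_t:s0 "
              "msg='op=PAM:session_open acct=\"core\" terminal=ssh res=success'")

    def parse_audit(line: str) -> dict:
        rec_type = re.search(r"audit\[\d+\]:\s+(\w+)", line).group(1)
        # quoted values (like msg='...') are kept whole; everything else splits on whitespace
        fields = re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", line)
        return {"type": rec_type, **{k: v.strip("'\"") for k, v in fields}}

    rec = parse_audit(sample)
    print(rec["type"], rec["ses"])   # -> USER_START 18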
Jan 14 01:00:10.673000 audit[5678]: USER_ACCT pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:10.675717 sshd[5678]: Accepted publickey for core from 68.220.241.50 port 37422 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:10.674000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:10.674000 audit[5678]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2a0c72b0 a2=3 a3=0 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:10.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:10.676913 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:10.685264 systemd-logind[1922]: New session 19 of user core. Jan 14 01:00:10.691576 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 01:00:10.697000 audit[5678]: USER_START pid=5678 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:10.699000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:11.846000 audit[5718]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:11.846000 audit[5718]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc18947650 a2=0 a3=7ffc1894763c items=0 ppid=3541 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:11.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:11.853000 audit[5718]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:11.853000 audit[5718]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc18947650 a2=0 a3=0 items=0 ppid=3541 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:11.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:11.868000 audit[5720]: NETFILTER_CFG 
table=filter:144 family=2 entries=38 op=nft_register_rule pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:11.868000 audit[5720]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff15937690 a2=0 a3=7fff1593767c items=0 ppid=3541 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:11.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:11.872000 audit[5720]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:11.872000 audit[5720]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff15937690 a2=0 a3=0 items=0 ppid=3541 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:11.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:11.936599 sshd[5708]: Connection closed by 68.220.241.50 port 37422 Jan 14 01:00:11.937429 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:11.940000 audit[5678]: USER_END pid=5678 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:11.940000 audit[5678]: CRED_DISP pid=5678 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:11.945080 systemd[1]: sshd@17-172.31.19.12:22-68.220.241.50:37422.service: Deactivated successfully. Jan 14 01:00:11.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.12:22-68.220.241.50:37422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:11.946957 systemd-logind[1922]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:00:11.948175 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:00:11.954262 systemd-logind[1922]: Removed session 19. Jan 14 01:00:12.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.12:22-68.220.241.50:37426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:12.024029 systemd[1]: Started sshd@18-172.31.19.12:22-68.220.241.50:37426.service - OpenSSH per-connection server daemon (68.220.241.50:37426). 
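The PROCTITLE values in the audit records are the process command line, hex-encoded with NUL separators between arguments. Decoding the value attached to the NETFILTER_CFG syscalls above recovers the iptables-restore invocation behind those rule refreshes:

    # Sketch: decode the hex-encoded PROCTITLE value copied from the audit records above.
    # The kernel stores argv NUL-separated, so splitting on b"\x00" recovers the arguments.
    proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

    argv = bytes.fromhex(proctitle).split(b"\x00")
    print([a.decode() for a in argv])
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']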
Jan 14 01:00:12.461000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:12.462930 sshd[5725]: Accepted publickey for core from 68.220.241.50 port 37426 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:12.462000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:12.462000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1d69cd60 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:12.462000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:12.464635 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:12.470079 systemd-logind[1922]: New session 20 of user core. Jan 14 01:00:12.474458 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 01:00:12.477000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:12.479000 audit[5729]: CRED_ACQ pid=5729 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:12.923545 sshd[5729]: Connection closed by 68.220.241.50 port 37426 Jan 14 01:00:12.924463 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:12.926000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:12.926000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:12.930872 systemd-logind[1922]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:00:12.931735 systemd[1]: sshd@18-172.31.19.12:22-68.220.241.50:37426.service: Deactivated successfully. Jan 14 01:00:12.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.12:22-68.220.241.50:37426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:12.935607 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 14 01:00:12.938941 systemd-logind[1922]: Removed session 20. Jan 14 01:00:13.010809 systemd[1]: Started sshd@19-172.31.19.12:22-68.220.241.50:54770.service - OpenSSH per-connection server daemon (68.220.241.50:54770). Jan 14 01:00:13.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.12:22-68.220.241.50:54770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:13.312131 kubelet[3293]: E0114 01:00:13.311401 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 01:00:13.474000 audit[5739]: USER_ACCT pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.477226 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 14 01:00:13.477331 kernel: audit: type=1101 audit(1768352413.474:867): pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.477372 sshd[5739]: Accepted publickey for core from 68.220.241.50 port 54770 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:13.479358 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:13.475000 audit[5739]: CRED_ACQ pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.485512 kernel: audit: type=1103 audit(1768352413.475:868): pid=5739 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.496540 kernel: audit: type=1006 audit(1768352413.475:869): pid=5739 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:00:13.496624 kernel: audit: type=1300 audit(1768352413.475:869): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7624acb0 a2=3 a3=0 items=0 ppid=1 pid=5739 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:13.497105 kernel: audit: type=1327 audit(1768352413.475:869): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:13.475000 audit[5739]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7624acb0 a2=3 a3=0 items=0 ppid=1 pid=5739 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:13.475000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:13.492858 systemd-logind[1922]: New session 21 of user core. Jan 14 01:00:13.501406 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 01:00:13.504000 audit[5739]: USER_START pid=5739 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.508000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.513751 kernel: audit: type=1105 audit(1768352413.504:870): pid=5739 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.513819 kernel: audit: type=1103 audit(1768352413.508:871): pid=5743 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.781080 sshd[5743]: Connection closed by 68.220.241.50 port 54770 Jan 14 01:00:13.781789 sshd-session[5739]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:13.782000 audit[5739]: USER_END pid=5739 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.787225 systemd[1]: sshd@19-172.31.19.12:22-68.220.241.50:54770.service: Deactivated successfully. Jan 14 01:00:13.789128 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 14 01:00:13.790205 kernel: audit: type=1106 audit(1768352413.782:872): pid=5739 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.782000 audit[5739]: CRED_DISP pid=5739 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.791604 systemd-logind[1922]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:00:13.792804 systemd-logind[1922]: Removed session 21. Jan 14 01:00:13.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.12:22-68.220.241.50:54770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:13.795581 kernel: audit: type=1104 audit(1768352413.782:873): pid=5739 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:13.795623 kernel: audit: type=1131 audit(1768352413.786:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.12:22-68.220.241.50:54770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:15.302656 kubelet[3293]: E0114 01:00:15.302599 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 01:00:15.305715 kubelet[3293]: E0114 01:00:15.305672 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 01:00:16.301638 kubelet[3293]: E0114 01:00:16.301574 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 01:00:16.302234 kubelet[3293]: E0114 01:00:16.301882 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 01:00:18.258000 audit[5756]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5756 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:18.258000 audit[5756]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe01c6f240 a2=0 a3=7ffe01c6f22c items=0 ppid=3541 pid=5756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:18.258000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:18.266000 audit[5756]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5756 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:18.266000 audit[5756]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe01c6f240 a2=0 a3=7ffe01c6f22c items=0 ppid=3541 pid=5756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:18.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:18.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.12:22-68.220.241.50:54774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:18.870323 systemd[1]: Started sshd@20-172.31.19.12:22-68.220.241.50:54774.service - OpenSSH per-connection server daemon (68.220.241.50:54774). Jan 14 01:00:18.871636 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:00:18.872137 kernel: audit: type=1130 audit(1768352418.869:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.12:22-68.220.241.50:54774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:19.303000 audit[5758]: USER_ACCT pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.306695 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:19.307638 sshd[5758]: Accepted publickey for core from 68.220.241.50 port 54774 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:19.310217 kernel: audit: type=1101 audit(1768352419.303:878): pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.303000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.315219 kernel: audit: type=1103 audit(1768352419.303:879): pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.315574 kernel: audit: type=1006 audit(1768352419.303:880): pid=5758 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:00:19.303000 audit[5758]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe11219130 a2=3 a3=0 items=0 ppid=1 pid=5758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:19.319379 kernel: audit: type=1300 audit(1768352419.303:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe11219130 a2=3 a3=0 items=0 ppid=1 pid=5758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:19.320065 systemd-logind[1922]: New session 22 of user core. Jan 14 01:00:19.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:19.325200 kernel: audit: type=1327 audit(1768352419.303:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:19.329384 systemd[1]: Started session-22.scope - Session 22 of User core. 
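The audit PROCTITLE fields in these records are the process command line, hex-encoded with NUL bytes separating argv entries. A small sketch that decodes the two values appearing above (the sshd-session records and the iptables-restore NETFILTER_CFG records):

```python
#!/usr/bin/env python3
"""Decode audit PROCTITLE values back into readable command lines."""

def decode_proctitle(hexstr: str) -> str:
    # argv entries are NUL-separated in the kernel's record of the command line
    return " ".join(bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00"))

# from the sshd-session PROCTITLE records above
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]

# from the iptables-restore NETFILTER_CFG records above
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

The second decode shows which xtables-nft-multi invocation produced the nft_register_rule / nft_register_chain entries logged at 01:00:18.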
Jan 14 01:00:19.331000 audit[5758]: USER_START pid=5758 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.336000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.339586 kernel: audit: type=1105 audit(1768352419.331:881): pid=5758 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.339630 kernel: audit: type=1103 audit(1768352419.336:882): pid=5762 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.598977 sshd[5762]: Connection closed by 68.220.241.50 port 54774 Jan 14 01:00:19.600417 sshd-session[5758]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:19.601000 audit[5758]: USER_END pid=5758 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.609842 kernel: audit: type=1106 audit(1768352419.601:883): pid=5758 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.606794 systemd[1]: sshd@20-172.31.19.12:22-68.220.241.50:54774.service: Deactivated successfully. Jan 14 01:00:19.609975 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:00:19.601000 audit[5758]: CRED_DISP pid=5758 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.617341 kernel: audit: type=1104 audit(1768352419.601:884): pid=5758 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:19.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.12:22-68.220.241.50:54774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:19.616613 systemd-logind[1922]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:00:19.618586 systemd-logind[1922]: Removed session 22. 
Jan 14 01:00:21.304065 kubelet[3293]: E0114 01:00:21.304017 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 01:00:24.697359 systemd[1]: Started sshd@21-172.31.19.12:22-68.220.241.50:49244.service - OpenSSH per-connection server daemon (68.220.241.50:49244). Jan 14 01:00:24.698616 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:24.698673 kernel: audit: type=1130 audit(1768352424.696:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.12:22-68.220.241.50:49244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:24.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.12:22-68.220.241.50:49244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:25.154000 audit[5774]: USER_ACCT pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.155781 sshd[5774]: Accepted publickey for core from 68.220.241.50 port 49244 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:25.158869 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:25.156000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.163731 kernel: audit: type=1101 audit(1768352425.154:887): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.163788 kernel: audit: type=1103 audit(1768352425.156:888): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.168613 kernel: audit: type=1006 audit(1768352425.156:889): pid=5774 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:00:25.156000 audit[5774]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcff36c7c0 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:25.171811 systemd-logind[1922]: New session 23 of user core. 
Jan 14 01:00:25.174727 kernel: audit: type=1300 audit(1768352425.156:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcff36c7c0 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:25.156000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:25.177564 kernel: audit: type=1327 audit(1768352425.156:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:25.180429 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 01:00:25.183000 audit[5774]: USER_START pid=5774 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.186000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.192350 kernel: audit: type=1105 audit(1768352425.183:890): pid=5774 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.192420 kernel: audit: type=1103 audit(1768352425.186:891): pid=5778 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.304038 kubelet[3293]: E0114 01:00:25.303782 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 01:00:25.465831 sshd[5778]: Connection closed by 68.220.241.50 port 49244 Jan 14 01:00:25.467358 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:25.467000 audit[5774]: USER_END pid=5774 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh 
res=success' Jan 14 01:00:25.472291 systemd[1]: sshd@21-172.31.19.12:22-68.220.241.50:49244.service: Deactivated successfully. Jan 14 01:00:25.474999 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:00:25.475201 kernel: audit: type=1106 audit(1768352425.467:892): pid=5774 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.467000 audit[5774]: CRED_DISP pid=5774 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:25.477708 systemd-logind[1922]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:00:25.478988 systemd-logind[1922]: Removed session 23. Jan 14 01:00:25.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.12:22-68.220.241.50:49244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:25.480257 kernel: audit: type=1104 audit(1768352425.467:893): pid=5774 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:27.309448 kubelet[3293]: E0114 01:00:27.309410 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 01:00:28.303418 kubelet[3293]: E0114 01:00:28.303382 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 01:00:28.304619 kubelet[3293]: E0114 01:00:28.304103 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 01:00:30.305205 kubelet[3293]: E0114 01:00:30.304228 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 01:00:30.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.12:22-68.220.241.50:49256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:30.551670 systemd[1]: Started sshd@22-172.31.19.12:22-68.220.241.50:49256.service - OpenSSH per-connection server daemon (68.220.241.50:49256). Jan 14 01:00:30.552801 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:30.552902 kernel: audit: type=1130 audit(1768352430.550:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.12:22-68.220.241.50:49256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:31.048000 audit[5789]: USER_ACCT pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.050202 sshd[5789]: Accepted publickey for core from 68.220.241.50 port 49256 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:31.052097 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:31.050000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.056047 kernel: audit: type=1101 audit(1768352431.048:896): pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.056106 kernel: audit: type=1103 audit(1768352431.050:897): pid=5789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.060411 kernel: audit: type=1006 audit(1768352431.050:898): pid=5789 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:00:31.050000 audit[5789]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeab179f10 a2=3 a3=0 items=0 ppid=1 pid=5789 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:31.064487 kernel: audit: type=1300 audit(1768352431.050:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeab179f10 a2=3 a3=0 items=0 ppid=1 pid=5789 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:31.050000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:31.070405 kernel: audit: type=1327 audit(1768352431.050:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:31.072383 systemd-logind[1922]: New session 24 of user core. Jan 14 01:00:31.077376 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 01:00:31.079000 audit[5789]: USER_START pid=5789 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.087231 kernel: audit: type=1105 audit(1768352431.079:899): pid=5789 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.087289 kernel: audit: type=1103 audit(1768352431.086:900): pid=5793 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.086000 audit[5793]: CRED_ACQ pid=5793 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.475194 sshd[5793]: Connection closed by 68.220.241.50 port 49256 Jan 14 01:00:31.476326 sshd-session[5789]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:31.477000 audit[5789]: USER_END pid=5789 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.486209 kernel: audit: type=1106 audit(1768352431.477:901): pid=5789 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.487434 systemd[1]: sshd@22-172.31.19.12:22-68.220.241.50:49256.service: Deactivated successfully. 
Jan 14 01:00:31.477000 audit[5789]: CRED_DISP pid=5789 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.492052 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:00:31.492204 kernel: audit: type=1104 audit(1768352431.477:902): pid=5789 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:31.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.12:22-68.220.241.50:49256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:31.497738 systemd-logind[1922]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:00:31.498790 systemd-logind[1922]: Removed session 24. Jan 14 01:00:33.309027 kubelet[3293]: E0114 01:00:33.308979 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 01:00:36.571747 systemd[1]: Started sshd@23-172.31.19.12:22-68.220.241.50:41906.service - OpenSSH per-connection server daemon (68.220.241.50:41906). Jan 14 01:00:36.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.12:22-68.220.241.50:41906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:36.576790 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:36.576892 kernel: audit: type=1130 audit(1768352436.570:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.12:22-68.220.241.50:41906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:37.032000 audit[5812]: USER_ACCT pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.034876 sshd[5812]: Accepted publickey for core from 68.220.241.50 port 41906 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:37.037988 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:37.040212 kernel: audit: type=1101 audit(1768352437.032:905): pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.035000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.046207 kernel: audit: type=1103 audit(1768352437.035:906): pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.035000 audit[5812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7df351a0 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:37.052060 kernel: audit: type=1006 audit(1768352437.035:907): pid=5812 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:00:37.052285 kernel: audit: type=1300 audit(1768352437.035:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7df351a0 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:37.035000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:37.059198 kernel: audit: type=1327 audit(1768352437.035:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:37.063054 systemd-logind[1922]: New session 25 of user core. Jan 14 01:00:37.068440 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:00:37.073000 audit[5812]: USER_START pid=5812 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.082320 kernel: audit: type=1105 audit(1768352437.073:908): pid=5812 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.080000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.088235 kernel: audit: type=1103 audit(1768352437.080:909): pid=5816 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.344857 sshd[5816]: Connection closed by 68.220.241.50 port 41906 Jan 14 01:00:37.346346 sshd-session[5812]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:37.347000 audit[5812]: USER_END pid=5812 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.354727 systemd[1]: sshd@23-172.31.19.12:22-68.220.241.50:41906.service: Deactivated successfully. Jan 14 01:00:37.355223 kernel: audit: type=1106 audit(1768352437.347:910): pid=5812 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.356507 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:00:37.347000 audit[5812]: CRED_DISP pid=5812 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.364350 kernel: audit: type=1104 audit(1768352437.347:911): pid=5812 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:37.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.12:22-68.220.241.50:41906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:37.364528 systemd-logind[1922]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:00:37.366044 systemd-logind[1922]: Removed session 25. 
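Each SSH connection in this log is handled by its own socket-activated unit ("OpenSSH per-connection server daemon"), and the unit name encodes a connection counter plus the local and peer endpoints. A sketch (the regex and field names are my own, not from the log) that splits the instance string back into readable parts:

```python
#!/usr/bin/env python3
"""Parse per-connection sshd unit names like those in the records above."""
import re

UNIT = re.compile(
    r"sshd@(?P<n>\d+)-(?P<local>[\d.]+):(?P<lport>\d+)-(?P<peer>[\d.]+):(?P<pport>\d+)\.service")

for name in (
    "sshd@23-172.31.19.12:22-68.220.241.50:41906.service",  # from the records above
    "sshd@24-172.31.19.12:22-68.220.241.50:41914.service",
):
    m = UNIT.fullmatch(name)
    if m:
        print(f"connection #{m['n']}: {m['peer']}:{m['pport']} -> {m['local']}:{m['lport']}")
```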
Jan 14 01:00:38.302047 kubelet[3293]: E0114 01:00:38.301918 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 01:00:41.304090 kubelet[3293]: E0114 01:00:41.304048 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 01:00:41.305260 kubelet[3293]: E0114 01:00:41.305220 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 01:00:42.432258 systemd[1]: Started sshd@24-172.31.19.12:22-68.220.241.50:41914.service - OpenSSH per-connection server daemon (68.220.241.50:41914). Jan 14 01:00:42.437569 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:42.437645 kernel: audit: type=1130 audit(1768352442.431:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.12:22-68.220.241.50:41914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:42.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.12:22-68.220.241.50:41914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:42.896000 audit[5852]: USER_ACCT pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.898504 sshd[5852]: Accepted publickey for core from 68.220.241.50 port 41914 ssh2: RSA SHA256:002NFNa+pW78QyiVemrE46QtSgcyjLzj0uNejAEHPK0 Jan 14 01:00:42.905298 kernel: audit: type=1101 audit(1768352442.896:914): pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.904000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.912228 kernel: audit: type=1103 audit(1768352442.904:915): pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.912728 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:42.917297 kernel: audit: type=1006 audit(1768352442.904:916): pid=5852 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 01:00:42.904000 audit[5852]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd34fdcd40 a2=3 a3=0 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:42.925206 kernel: audit: type=1300 audit(1768352442.904:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd34fdcd40 a2=3 a3=0 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:42.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:42.928218 kernel: audit: type=1327 audit(1768352442.904:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:42.933794 systemd-logind[1922]: New session 26 of user core. Jan 14 01:00:42.940210 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:00:42.945000 audit[5852]: USER_START pid=5852 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.956128 kernel: audit: type=1105 audit(1768352442.945:917): pid=5852 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.956274 kernel: audit: type=1103 audit(1768352442.953:918): pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:42.953000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:43.304764 kubelet[3293]: E0114 01:00:43.304010 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 01:00:43.307378 containerd[1939]: time="2026-01-14T01:00:43.307294965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:00:43.348216 sshd[5856]: Connection closed by 68.220.241.50 port 41914 Jan 14 01:00:43.349085 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:43.350000 audit[5852]: USER_END pid=5852 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:43.355803 systemd-logind[1922]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:00:43.356563 systemd[1]: sshd@24-172.31.19.12:22-68.220.241.50:41914.service: Deactivated successfully. Jan 14 01:00:43.359217 kernel: audit: type=1106 audit(1768352443.350:919): pid=5852 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:43.361454 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:00:43.367341 systemd-logind[1922]: Removed session 26. 
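The containerd pull attempts that follow fail with "404 Not Found" from ghcr.io because the v3.30.4 tags do not resolve. A sketch that reproduces the check out-of-band, assuming ghcr.io serves anonymous pull tokens and the standard /v2 manifest endpoint for public images (neither detail is shown in the log, so treat the endpoints as assumptions):

```python
#!/usr/bin/env python3
"""Check whether the Calico tags containerd is failing on exist in the registry."""
import json
import urllib.error
import urllib.request

def tag_status(repo: str, tag: str) -> int:
    # request an anonymous pull token for the repository
    with urllib.request.urlopen(
            f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull") as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        })
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 404 would match the "not found" errors in the log

for image in ("flatcar/calico/whisker", "flatcar/calico/apiserver"):
    print(image, tag_status(image, "v3.30.4"))
```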
Jan 14 01:00:43.351000 audit[5852]: CRED_DISP pid=5852 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:43.376969 kernel: audit: type=1104 audit(1768352443.351:920): pid=5852 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:00:43.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.12:22-68.220.241.50:41914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:43.678038 containerd[1939]: time="2026-01-14T01:00:43.677876741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:43.680044 containerd[1939]: time="2026-01-14T01:00:43.679962968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:00:43.680283 containerd[1939]: time="2026-01-14T01:00:43.680019558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:43.680397 kubelet[3293]: E0114 01:00:43.680335 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:00:43.680446 kubelet[3293]: E0114 01:00:43.680397 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:00:43.681408 kubelet[3293]: E0114 01:00:43.681366 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71e9debd4c934ffd9f54d469c8ec9440,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:43.684375 containerd[1939]: time="2026-01-14T01:00:43.684173531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:00:43.960148 containerd[1939]: time="2026-01-14T01:00:43.959737272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:43.962240 containerd[1939]: time="2026-01-14T01:00:43.962206545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:43.962327 containerd[1939]: time="2026-01-14T01:00:43.962212575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:00:43.962487 kubelet[3293]: E0114 01:00:43.962450 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:00:43.962544 kubelet[3293]: E0114 01:00:43.962500 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:00:43.962677 kubelet[3293]: E0114 01:00:43.962613 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92d7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c479bf86-lp2t6_calico-system(5410ee2c-498a-49d0-bc39-5e704c2599b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:43.966197 kubelet[3293]: E0114 01:00:43.963865 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 01:00:45.304957 containerd[1939]: time="2026-01-14T01:00:45.304917401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:00:45.572571 containerd[1939]: time="2026-01-14T01:00:45.572527861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:45.574687 containerd[1939]: time="2026-01-14T01:00:45.574612830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
14 01:00:45.574687 containerd[1939]: time="2026-01-14T01:00:45.574658005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:45.574998 kubelet[3293]: E0114 01:00:45.574872 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:00:45.574998 kubelet[3293]: E0114 01:00:45.574921 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:00:45.575446 kubelet[3293]: E0114 01:00:45.575092 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hskc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-cpdlw_calico-apiserver(5e14ba26-eb09-4a70-a4a6-9ad8bd987906): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:45.576972 kubelet[3293]: E0114 01:00:45.576815 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 01:00:52.301362 containerd[1939]: time="2026-01-14T01:00:52.301313618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:00:52.583829 containerd[1939]: time="2026-01-14T01:00:52.583773354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:52.586006 containerd[1939]: time="2026-01-14T01:00:52.585949706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:00:52.586243 containerd[1939]: time="2026-01-14T01:00:52.585970177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:52.586345 kubelet[3293]: E0114 01:00:52.586255 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:00:52.586345 kubelet[3293]: E0114 01:00:52.586308 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:00:52.586767 kubelet[3293]: E0114 01:00:52.586469 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:52.588498 containerd[1939]: time="2026-01-14T01:00:52.588464891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:00:52.866411 containerd[1939]: time="2026-01-14T01:00:52.866281277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:52.868491 containerd[1939]: time="2026-01-14T01:00:52.868437470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:00:52.868599 containerd[1939]: time="2026-01-14T01:00:52.868530509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:52.868829 kubelet[3293]: E0114 01:00:52.868788 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:00:52.868829 kubelet[3293]: E0114 01:00:52.868879 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:00:52.869090 kubelet[3293]: E0114 01:00:52.869035 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tkkg_calico-system(e723b976-5fd1-4159-bd8a-f3fe80761ec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:52.870306 kubelet[3293]: E0114 01:00:52.870237 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5" Jan 14 01:00:54.301239 containerd[1939]: time="2026-01-14T01:00:54.301202154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:00:54.593122 containerd[1939]: time="2026-01-14T01:00:54.593060778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:00:54.595210 containerd[1939]: time="2026-01-14T01:00:54.595146226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:00:54.595374 containerd[1939]: time="2026-01-14T01:00:54.595178510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:54.595430 kubelet[3293]: E0114 01:00:54.595384 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:00:54.595802 kubelet[3293]: E0114 01:00:54.595440 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:00:54.595802 kubelet[3293]: E0114 01:00:54.595565 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glhgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-cdb9b59bb-pr6gg_calico-apiserver(7449b447-f9d5-45e2-8001-6763bc56b2d8): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:54.596760 kubelet[3293]: E0114 01:00:54.596719 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8" Jan 14 01:00:55.304337 kubelet[3293]: E0114 01:00:55.304263 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9" Jan 14 01:00:55.305341 containerd[1939]: time="2026-01-14T01:00:55.305307521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:00:55.609967 containerd[1939]: time="2026-01-14T01:00:55.609922282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:55.612022 containerd[1939]: time="2026-01-14T01:00:55.611964544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:00:55.612149 containerd[1939]: time="2026-01-14T01:00:55.612040833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:55.612245 kubelet[3293]: E0114 01:00:55.612169 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:00:55.612522 kubelet[3293]: E0114 01:00:55.612244 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:00:55.612522 kubelet[3293]: E0114 01:00:55.612443 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4df85555-29mjx_calico-system(f05abc55-0515-49ca-aacb-ebde63d756a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:55.613639 kubelet[3293]: E0114 01:00:55.613603 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4" Jan 14 01:00:56.301105 containerd[1939]: time="2026-01-14T01:00:56.301039031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:00:56.560126 kubelet[3293]: E0114 01:00:56.560001 3293 
controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-19-12)" Jan 14 01:00:56.589058 containerd[1939]: time="2026-01-14T01:00:56.588980673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:00:56.591274 containerd[1939]: time="2026-01-14T01:00:56.591159106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:00:56.591274 containerd[1939]: time="2026-01-14T01:00:56.591213009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:00:56.591552 kubelet[3293]: E0114 01:00:56.591405 3293 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:00:56.591552 kubelet[3293]: E0114 01:00:56.591447 3293 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:00:56.591642 kubelet[3293]: E0114 01:00:56.591581 3293 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qw6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xlvwp_calico-system(13972001-c667-49c0-9374-a2bbe47d8026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:00:56.593760 kubelet[3293]: E0114 01:00:56.593697 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026" Jan 14 01:00:57.843683 systemd[1]: cri-containerd-c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9.scope: Deactivated successfully. Jan 14 01:00:57.844111 systemd[1]: cri-containerd-c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9.scope: Consumed 3.437s CPU time, 114.6M memory peak, 106.9M read from disk. Jan 14 01:00:57.847949 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:00:57.847997 kernel: audit: type=1334 audit(1768352457.843:922): prog-id=263 op=LOAD Jan 14 01:00:57.843000 audit: BPF prog-id=263 op=LOAD Jan 14 01:00:57.843000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:00:57.850227 kernel: audit: type=1334 audit(1768352457.843:923): prog-id=97 op=UNLOAD Jan 14 01:00:57.849000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:00:57.852457 kernel: audit: type=1334 audit(1768352457.849:924): prog-id=110 op=UNLOAD Jan 14 01:00:57.849000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:00:57.856555 kernel: audit: type=1334 audit(1768352457.849:925): prog-id=114 op=UNLOAD Jan 14 01:00:58.037845 containerd[1939]: time="2026-01-14T01:00:58.037783685Z" level=info msg="received container exit event container_id:\"c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9\" id:\"c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9\" pid:3123 exit_status:1 exited_at:{seconds:1768352457 nanos:912658018}" Jan 14 01:00:58.121492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9-rootfs.mount: Deactivated successfully. Jan 14 01:00:58.229828 systemd[1]: cri-containerd-31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6.scope: Deactivated successfully. Jan 14 01:00:58.230208 systemd[1]: cri-containerd-31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6.scope: Consumed 9.397s CPU time, 107.2M memory peak, 49.4M read from disk. 
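The stretch of entries above shows the same failure for every Calico image: containerd's fetch of ghcr.io/flatcar/calico/*:v3.30.4 returns 404 Not Found, the CRI pull fails with NotFound, and kubelet records ErrImagePull and later ImagePullBackOff for the affected containers. A minimal sketch of reproducing that resolution failure directly against the node's containerd, using the containerd Go client. The socket path and the "k8s.io" namespace are assumptions matching a default CRI-plugin setup, not something the log states.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node-local containerd socket (default path; an assumption).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's CRI plugin keeps its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references the log shows failing with 404.
	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if errdefs.IsNotFound(err) {
		// The same condition containerd reports as "failed to resolve image: ... not found".
		fmt.Printf("registry has no manifest for %s\n", ref)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```

The log only establishes that the tag cannot be resolved at ghcr.io; whether the fix is a different tag, a mirror, or a pre-loaded image is outside what these entries show.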
Jan 14 01:00:58.232000 audit: BPF prog-id=153 op=UNLOAD Jan 14 01:00:58.234229 containerd[1939]: time="2026-01-14T01:00:58.234203583Z" level=info msg="received container exit event container_id:\"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\" id:\"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\" pid:3579 exit_status:1 exited_at:{seconds:1768352458 nanos:233857845}" Jan 14 01:00:58.236156 kernel: audit: type=1334 audit(1768352458.232:926): prog-id=153 op=UNLOAD Jan 14 01:00:58.236245 kernel: audit: type=1334 audit(1768352458.232:927): prog-id=157 op=UNLOAD Jan 14 01:00:58.232000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:00:58.259763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6-rootfs.mount: Deactivated successfully. Jan 14 01:00:58.898381 kubelet[3293]: I0114 01:00:58.898322 3293 scope.go:117] "RemoveContainer" containerID="31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6" Jan 14 01:00:58.898815 kubelet[3293]: I0114 01:00:58.898643 3293 scope.go:117] "RemoveContainer" containerID="c011025727dd0bfe8564a2172d1dd738060f9517bc11b365a2ed796d36f180e9" Jan 14 01:00:58.919062 containerd[1939]: time="2026-01-14T01:00:58.919009457Z" level=info msg="CreateContainer within sandbox \"b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 01:00:58.921654 containerd[1939]: time="2026-01-14T01:00:58.921563633Z" level=info msg="CreateContainer within sandbox \"927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 01:00:59.005473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968159749.mount: Deactivated successfully. 
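The RemoveContainer / CreateContainer entries just above (Attempt:1 for kube-controller-manager and tigera-operator) are kubelet tearing down containers that exited and recreating them. The same picture is visible from the API side as restart counts and waiting reasons such as ImagePullBackOff. A hedged client-go sketch; the kubeconfig location is an assumption.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load a kubeconfig from the default location (assumption).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Print containers that are waiting (e.g. ImagePullBackOff) or have restarted,
	// mirroring the scope.go / pod_workers entries in the log above.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.RestartCount > 0 || st.State.Waiting != nil {
				reason := ""
				if st.State.Waiting != nil {
					reason = st.State.Waiting.Reason
				}
				fmt.Printf("%s/%s %s restarts=%d waiting=%q\n",
					p.Namespace, p.Name, st.Name, st.RestartCount, reason)
			}
		}
	}
}
```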
Jan 14 01:00:59.008284 containerd[1939]: time="2026-01-14T01:00:59.008245431Z" level=info msg="Container 529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:59.013361 containerd[1939]: time="2026-01-14T01:00:59.013322399Z" level=info msg="Container 86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:59.044443 containerd[1939]: time="2026-01-14T01:00:59.044399027Z" level=info msg="CreateContainer within sandbox \"927aaf7c5f5b56348f4a8f710d79cc4d7be1be3356ba8eb85e5744c5771ed667\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262\"" Jan 14 01:00:59.045257 containerd[1939]: time="2026-01-14T01:00:59.044975180Z" level=info msg="StartContainer for \"529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262\"" Jan 14 01:00:59.052944 containerd[1939]: time="2026-01-14T01:00:59.052832393Z" level=info msg="connecting to shim 529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262" address="unix:///run/containerd/s/7f5ce4b54665a7bea5f0471378e79b758ae01c8e0d8d3aa25eb32f222d6148a2" protocol=ttrpc version=3 Jan 14 01:00:59.059139 containerd[1939]: time="2026-01-14T01:00:59.059090754Z" level=info msg="CreateContainer within sandbox \"b6616050c2d9e0bb81aaaa66326aa054cff94e7530d7ae5e9ee0af0696abb501\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7\"" Jan 14 01:00:59.059765 containerd[1939]: time="2026-01-14T01:00:59.059732984Z" level=info msg="StartContainer for \"86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7\"" Jan 14 01:00:59.060780 containerd[1939]: time="2026-01-14T01:00:59.060702287Z" level=info msg="connecting to shim 86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7" address="unix:///run/containerd/s/d858b4ced17fbdbbc511d282f724a3c859204a12a9986fec7ceb8ceee921b89e" protocol=ttrpc version=3 Jan 14 01:00:59.080606 systemd[1]: Started cri-containerd-529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262.scope - libcontainer container 529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262. Jan 14 01:00:59.107000 audit: BPF prog-id=264 op=LOAD Jan 14 01:00:59.111286 kernel: audit: type=1334 audit(1768352459.107:928): prog-id=264 op=LOAD Jan 14 01:00:59.111000 audit: BPF prog-id=265 op=LOAD Jan 14 01:00:59.111000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.115527 kernel: audit: type=1334 audit(1768352459.111:929): prog-id=265 op=LOAD Jan 14 01:00:59.115587 kernel: audit: type=1300 audit(1768352459.111:929): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.123310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997046219.mount: Deactivated successfully. 
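The "connecting to shim ... protocol=ttrpc version=3" lines above describe containerd's internal link to each runtime shim; kubelet itself reaches containerd through the CRI gRPC services on the same socket. For diagnostics, a small CRI client can list containers the way crictl does. A sketch under the assumption that the default containerd socket path is in use:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the containerd CRI endpoint over its unix socket (path is an assumption).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Runtime identification, roughly what `crictl version` prints.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s\n", v.RuntimeName, v.RuntimeVersion)

	// List all containers, roughly `crictl ps -a`.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
```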
Jan 14 01:00:59.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.112000 audit: BPF prog-id=265 op=UNLOAD Jan 14 01:00:59.112000 audit[5918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.112000 audit: BPF prog-id=266 op=LOAD Jan 14 01:00:59.112000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.131203 kernel: audit: type=1327 audit(1768352459.111:929): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.112000 audit: BPF prog-id=267 op=LOAD Jan 14 01:00:59.112000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.112000 audit: BPF prog-id=267 op=UNLOAD Jan 14 01:00:59.112000 audit[5918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.112000 audit: BPF prog-id=266 op=UNLOAD Jan 14 01:00:59.112000 audit[5918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3485 
pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.113000 audit: BPF prog-id=268 op=LOAD Jan 14 01:00:59.113000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3485 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532393931343738326332333365363433373330666266323930636532 Jan 14 01:00:59.144309 systemd[1]: Started cri-containerd-86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7.scope - libcontainer container 86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7. Jan 14 01:00:59.174000 audit: BPF prog-id=269 op=LOAD Jan 14 01:00:59.177624 containerd[1939]: time="2026-01-14T01:00:59.177527137Z" level=info msg="StartContainer for \"529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262\" returns successfully" Jan 14 01:00:59.178000 audit: BPF prog-id=270 op=LOAD Jan 14 01:00:59.178000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.179000 audit: BPF prog-id=270 op=UNLOAD Jan 14 01:00:59.179000 audit[5939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.179000 audit: BPF prog-id=271 op=LOAD Jan 14 01:00:59.179000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.179000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.180000 audit: BPF prog-id=272 op=LOAD Jan 14 01:00:59.180000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.180000 audit: BPF prog-id=272 op=UNLOAD Jan 14 01:00:59.180000 audit[5939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.180000 audit: BPF prog-id=271 op=UNLOAD Jan 14 01:00:59.180000 audit[5939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.180000 audit: BPF prog-id=273 op=LOAD Jan 14 01:00:59.180000 audit[5939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2961 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:59.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666332613761316361613730323139316261323133343836623263 Jan 14 01:00:59.239702 containerd[1939]: time="2026-01-14T01:00:59.239572653Z" level=info msg="StartContainer for \"86fc2a7a1caa702191ba213486b2ca28536a0085347bb77507d93d98deff56b7\" returns successfully" Jan 14 01:01:00.301170 kubelet[3293]: E0114 01:01:00.301127 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906" Jan 14 01:01:02.049334 systemd[1]: cri-containerd-52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3.scope: Deactivated successfully. Jan 14 01:01:02.050000 audit: BPF prog-id=274 op=LOAD Jan 14 01:01:02.050000 audit: BPF prog-id=95 op=UNLOAD Jan 14 01:01:02.051000 audit: BPF prog-id=115 op=UNLOAD Jan 14 01:01:02.051000 audit: BPF prog-id=119 op=UNLOAD Jan 14 01:01:02.050243 systemd[1]: cri-containerd-52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3.scope: Consumed 2.032s CPU time, 39.9M memory peak, 37.7M read from disk. Jan 14 01:01:02.055436 containerd[1939]: time="2026-01-14T01:01:02.055394292Z" level=info msg="received container exit event container_id:\"52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3\" id:\"52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3\" pid:3132 exit_status:1 exited_at:{seconds:1768352462 nanos:54859482}" Jan 14 01:01:02.084714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3-rootfs.mount: Deactivated successfully. Jan 14 01:01:02.903681 kubelet[3293]: I0114 01:01:02.903630 3293 scope.go:117] "RemoveContainer" containerID="52ec8a0ed79ee465b8631b665e07f29a964f28541516af4333b01759f8777cf3" Jan 14 01:01:02.906162 containerd[1939]: time="2026-01-14T01:01:02.906125682Z" level=info msg="CreateContainer within sandbox \"e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 01:01:02.923786 containerd[1939]: time="2026-01-14T01:01:02.923036705Z" level=info msg="Container 48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:01:02.936782 containerd[1939]: time="2026-01-14T01:01:02.936743376Z" level=info msg="CreateContainer within sandbox \"e0624ff800d072cdd7eb9fbe78f9aabc14846d3eadaf98eba2519cc9ba7da7fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee\"" Jan 14 01:01:02.937252 containerd[1939]: time="2026-01-14T01:01:02.937231139Z" level=info msg="StartContainer for \"48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee\"" Jan 14 01:01:02.938289 containerd[1939]: time="2026-01-14T01:01:02.938261608Z" level=info msg="connecting to shim 48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee" address="unix:///run/containerd/s/873f1d8a013bfbdb4f30ef576093bc26dacbd930b2fc09a652a50b6d81060700" protocol=ttrpc version=3 Jan 14 01:01:02.958383 systemd[1]: Started cri-containerd-48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee.scope - libcontainer container 48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee. 
Jan 14 01:01:02.973109 kernel: kauditd_printk_skb: 44 callbacks suppressed
Jan 14 01:01:02.973228 kernel: audit: type=1334 audit(1768352462.969:948): prog-id=275 op=LOAD
Jan 14 01:01:02.969000 audit: BPF prog-id=275 op=LOAD
Jan 14 01:01:02.970000 audit: BPF prog-id=276 op=LOAD
Jan 14 01:01:02.979487 kernel: audit: type=1334 audit(1768352462.970:949): prog-id=276 op=LOAD
Jan 14 01:01:02.979551 kernel: audit: type=1300 audit(1768352462.970:949): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.970000 audit[5997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.984686 kernel: audit: type=1327 audit(1768352462.970:949): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.986005 kernel: audit: type=1334 audit(1768352462.970:950): prog-id=276 op=UNLOAD
Jan 14 01:01:02.970000 audit: BPF prog-id=276 op=UNLOAD
Jan 14 01:01:02.990931 kernel: audit: type=1300 audit(1768352462.970:950): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.970000 audit[5997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.996054 kernel: audit: type=1327 audit(1768352462.970:950): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: BPF prog-id=277 op=LOAD
Jan 14 01:01:03.002911 kernel: audit: type=1334 audit(1768352462.970:951): prog-id=277 op=LOAD
Jan 14 01:01:03.002990 kernel: audit: type=1300 audit(1768352462.970:951): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.970000 audit[5997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: BPF prog-id=278 op=LOAD
Jan 14 01:01:02.970000 audit[5997]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:03.010276 kernel: audit: type=1327 audit(1768352462.970:951): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: BPF prog-id=278 op=UNLOAD
Jan 14 01:01:02.970000 audit[5997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.970000 audit: BPF prog-id=277 op=UNLOAD
Jan 14 01:01:02.970000 audit[5997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:02.971000 audit: BPF prog-id=279 op=LOAD
Jan 14 01:01:02.971000 audit[5997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2975 pid=5997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:01:02.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438633464646234336132346231623833393636393961313065373835
Jan 14 01:01:03.018543 containerd[1939]: time="2026-01-14T01:01:03.018509117Z" level=info msg="StartContainer for \"48c4ddb43a24b1b8396699a10e78525f6a8aba857e79c415910245f7e163ddee\" returns successfully"
Jan 14 01:01:05.301203 kubelet[3293]: E0114 01:01:05.301144 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8"
Jan 14 01:01:06.302393 kubelet[3293]: E0114 01:01:06.302328 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4"
Jan 14 01:01:06.302811 kubelet[3293]: E0114 01:01:06.302397 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tkkg" podUID="e723b976-5fd1-4159-bd8a-f3fe80761ec5"
Jan 14 01:01:06.566998 kubelet[3293]: E0114 01:01:06.566362 3293 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 14 01:01:07.302100 kubelet[3293]: E0114 01:01:07.301950 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9"
Jan 14 01:01:08.302260 kubelet[3293]: E0114 01:01:08.302218 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xlvwp" podUID="13972001-c667-49c0-9374-a2bbe47d8026"
Jan 14 01:01:11.302051 kubelet[3293]: E0114 01:01:11.301991 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-cpdlw" podUID="5e14ba26-eb09-4a70-a4a6-9ad8bd987906"
Jan 14 01:01:11.780589 containerd[1939]: time="2026-01-14T01:01:11.780429668Z" level=info msg="received container exit event container_id:\"529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262\" id:\"529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262\" pid:5934 exit_status:1 exited_at:{seconds:1768352471 nanos:780217831}"
Jan 14 01:01:11.780748 systemd[1]: cri-containerd-529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262.scope: Deactivated successfully.
Jan 14 01:01:11.781385 systemd[1]: cri-containerd-529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262.scope: Consumed 327ms CPU time, 64.7M memory peak, 32.4M read from disk.
Jan 14 01:01:11.781000 audit: BPF prog-id=264 op=UNLOAD
Jan 14 01:01:11.783662 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jan 14 01:01:11.783716 kernel: audit: type=1334 audit(1768352471.781:956): prog-id=264 op=UNLOAD
Jan 14 01:01:11.781000 audit: BPF prog-id=268 op=UNLOAD
Jan 14 01:01:11.787424 kernel: audit: type=1334 audit(1768352471.781:957): prog-id=268 op=UNLOAD
Jan 14 01:01:11.803242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262-rootfs.mount: Deactivated successfully.
Jan 14 01:01:11.926032 kubelet[3293]: I0114 01:01:11.925506 3293 scope.go:117] "RemoveContainer" containerID="31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6"
Jan 14 01:01:11.926032 kubelet[3293]: I0114 01:01:11.925823 3293 scope.go:117] "RemoveContainer" containerID="529914782c233e643730fbf290ce2f29af6fe9c42557c07517c88f60c2f3c262"
Jan 14 01:01:11.926032 kubelet[3293]: E0114 01:01:11.925960 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-mxcrz_tigera-operator(86222dc6-13c8-49c8-97d2-24af60c81dd7)\"" pod="tigera-operator/tigera-operator-7dcd859c48-mxcrz" podUID="86222dc6-13c8-49c8-97d2-24af60c81dd7"
Jan 14 01:01:11.997570 containerd[1939]: time="2026-01-14T01:01:11.997521262Z" level=info msg="RemoveContainer for \"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\""
Jan 14 01:01:12.024717 containerd[1939]: time="2026-01-14T01:01:12.024654424Z" level=info msg="RemoveContainer for \"31e657610061df2a03831e4abace3030f1c96544e78dd28f12e4579ef8d396c6\" returns successfully"
Jan 14 01:01:16.571500 kubelet[3293]: E0114 01:01:16.571448 3293 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-12?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 14 01:01:17.302223 kubelet[3293]: E0114 01:01:17.302137 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-cdb9b59bb-pr6gg" podUID="7449b447-f9d5-45e2-8001-6763bc56b2d8"
Jan 14 01:01:18.302178 kubelet[3293]: E0114 01:01:18.302106 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c479bf86-lp2t6" podUID="5410ee2c-498a-49d0-bc39-5e704c2599b9"
Jan 14 01:01:19.301166 kubelet[3293]: E0114 01:01:19.301044 3293 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4df85555-29mjx" podUID="f05abc55-0515-49ca-aacb-ebde63d756a4"