Dec 16 12:16:24.932567 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 16 12:16:24.932613 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 12:16:24.932638 kernel: KASLR disabled due to lack of seed Dec 16 12:16:24.932655 kernel: efi: EFI v2.7 by EDK II Dec 16 12:16:24.932671 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Dec 16 12:16:24.932686 kernel: secureboot: Secure boot disabled Dec 16 12:16:24.932704 kernel: ACPI: Early table checksum verification disabled Dec 16 12:16:24.932719 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 16 12:16:24.932735 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 16 12:16:24.932755 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 16 12:16:24.932771 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 16 12:16:24.932786 kernel: ACPI: FACS 0x0000000078630000 000040 Dec 16 12:16:24.932802 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 16 12:16:24.932818 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 16 12:16:24.932840 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 16 12:16:24.932857 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 16 12:16:24.932874 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 16 12:16:24.932892 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 16 12:16:24.932908 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 16 12:16:24.932925 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 16 12:16:24.932941 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 16 12:16:24.932958 kernel: printk: legacy bootconsole [uart0] enabled Dec 16 12:16:24.932975 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:16:24.932991 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 16 12:16:24.933012 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Dec 16 12:16:24.933029 kernel: Zone ranges: Dec 16 12:16:24.933045 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 16 12:16:24.933062 kernel: DMA32 empty Dec 16 12:16:24.933078 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 16 12:16:24.933095 kernel: Device empty Dec 16 12:16:24.933112 kernel: Movable zone start for each node Dec 16 12:16:24.933170 kernel: Early memory node ranges Dec 16 12:16:24.933190 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 16 12:16:24.933232 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 16 12:16:24.933251 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 16 12:16:24.933268 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 16 12:16:24.933291 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 16 12:16:24.933307 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 16 12:16:24.933324 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 16 12:16:24.933341 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 16 12:16:24.933365 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 16 12:16:24.933386 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 16 12:16:24.933404 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Dec 16 12:16:24.933422 kernel: psci: probing for conduit method from ACPI. Dec 16 12:16:24.933439 kernel: psci: PSCIv1.0 detected in firmware. Dec 16 12:16:24.933457 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:16:24.933474 kernel: psci: Trusted OS migration not required Dec 16 12:16:24.933492 kernel: psci: SMC Calling Convention v1.1 Dec 16 12:16:24.933509 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Dec 16 12:16:24.933527 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:16:24.933549 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:16:24.933568 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 12:16:24.933585 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:16:24.933603 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:16:24.933620 kernel: CPU features: detected: Spectre-v2 Dec 16 12:16:24.933638 kernel: CPU features: detected: Spectre-v3a Dec 16 12:16:24.933655 kernel: CPU features: detected: Spectre-BHB Dec 16 12:16:24.933672 kernel: CPU features: detected: ARM erratum 1742098 Dec 16 12:16:24.933690 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 16 12:16:24.933707 kernel: alternatives: applying boot alternatives Dec 16 12:16:24.933727 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:16:24.933749 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:16:24.933767 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:16:24.933784 kernel: Fallback order for Node 0: 0 Dec 16 12:16:24.933802 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Dec 16 12:16:24.933819 kernel: Policy zone: Normal Dec 16 12:16:24.933837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:16:24.933854 kernel: software IO TLB: area num 2. Dec 16 12:16:24.933872 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Dec 16 12:16:24.933889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:16:24.933907 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:16:24.933930 kernel: rcu: RCU event tracing is enabled. Dec 16 12:16:24.933948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:16:24.933966 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:16:24.933984 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:16:24.934001 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:16:24.934019 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:16:24.934036 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:16:24.934054 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:16:24.934072 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:16:24.934089 kernel: GICv3: 96 SPIs implemented Dec 16 12:16:24.934106 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:16:24.934128 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:16:24.934146 kernel: GICv3: GICv3 features: 16 PPIs Dec 16 12:16:24.934163 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 12:16:24.934181 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 16 12:16:24.934217 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 16 12:16:24.934240 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Dec 16 12:16:24.934260 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Dec 16 12:16:24.934279 kernel: GICv3: using LPI property table @0x0000000400110000 Dec 16 12:16:24.934296 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 16 12:16:24.934314 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Dec 16 12:16:24.934332 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:16:24.934356 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 16 12:16:24.934374 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 16 12:16:24.934392 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 16 12:16:24.934410 kernel: Console: colour dummy device 80x25 Dec 16 12:16:24.934428 kernel: printk: legacy console [tty1] enabled Dec 16 12:16:24.934447 kernel: ACPI: Core revision 20240827 Dec 16 12:16:24.934466 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 16 12:16:24.934484 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:16:24.934506 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:16:24.934525 kernel: landlock: Up and running. Dec 16 12:16:24.934543 kernel: SELinux: Initializing. Dec 16 12:16:24.934561 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:16:24.934580 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:16:24.934598 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:16:24.934618 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:16:24.934637 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:16:24.934659 kernel: Remapping and enabling EFI services. Dec 16 12:16:24.934677 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:16:24.934696 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:16:24.934715 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 16 12:16:24.934733 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Dec 16 12:16:24.934753 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 16 12:16:24.934780 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:16:24.934834 kernel: SMP: Total of 2 processors activated. 
Dec 16 12:16:24.934884 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:16:24.934947 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:16:24.934986 kernel: CPU features: detected: 32-bit EL1 Support Dec 16 12:16:24.935006 kernel: CPU features: detected: CRC32 instructions Dec 16 12:16:24.935025 kernel: alternatives: applying system-wide alternatives Dec 16 12:16:24.935047 kernel: Memory: 3823400K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 185716K reserved, 16384K cma-reserved) Dec 16 12:16:24.935069 kernel: devtmpfs: initialized Dec 16 12:16:24.935097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:16:24.935116 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:16:24.935135 kernel: 23648 pages in range for non-PLT usage Dec 16 12:16:24.935154 kernel: 515168 pages in range for PLT usage Dec 16 12:16:24.935173 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:16:24.935214 kernel: SMBIOS 3.0.0 present. Dec 16 12:16:24.935264 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 16 12:16:24.935286 kernel: DMI: Memory slots populated: 0/0 Dec 16 12:16:24.935305 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:16:24.935325 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:16:24.935345 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:16:24.935364 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:16:24.935389 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:16:24.935408 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Dec 16 12:16:24.935427 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:16:24.935446 kernel: cpuidle: using governor menu Dec 16 12:16:24.935465 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 12:16:24.935484 kernel: ASID allocator initialised with 65536 entries Dec 16 12:16:24.935503 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:16:24.935526 kernel: Serial: AMBA PL011 UART driver Dec 16 12:16:24.935545 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:16:24.935564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:16:24.935583 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:16:24.935602 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:16:24.935621 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:16:24.935640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:16:24.935662 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:16:24.935681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:16:24.935700 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:16:24.935719 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:16:24.935738 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:16:24.935756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:16:24.935775 kernel: ACPI: Interpreter enabled Dec 16 12:16:24.935798 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:16:24.935817 kernel: ACPI: MCFG table detected, 1 entries Dec 16 12:16:24.935836 kernel: ACPI: CPU0 has been hot-added Dec 16 12:16:24.935855 kernel: ACPI: CPU1 has been hot-added Dec 16 12:16:24.935874 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Dec 16 12:16:24.936281 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:16:24.936552 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 12:16:24.936813 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 12:16:24.937065 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Dec 16 12:16:24.937381 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Dec 16 12:16:24.937408 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 16 12:16:24.937428 kernel: acpiphp: Slot [1] registered Dec 16 12:16:24.937447 kernel: acpiphp: Slot [2] registered Dec 16 12:16:24.937473 kernel: acpiphp: Slot [3] registered Dec 16 12:16:24.937492 kernel: acpiphp: Slot [4] registered Dec 16 12:16:24.937511 kernel: acpiphp: Slot [5] registered Dec 16 12:16:24.937530 kernel: acpiphp: Slot [6] registered Dec 16 12:16:24.937548 kernel: acpiphp: Slot [7] registered Dec 16 12:16:24.937567 kernel: acpiphp: Slot [8] registered Dec 16 12:16:24.937586 kernel: acpiphp: Slot [9] registered Dec 16 12:16:24.937604 kernel: acpiphp: Slot [10] registered Dec 16 12:16:24.937628 kernel: acpiphp: Slot [11] registered Dec 16 12:16:24.937647 kernel: acpiphp: Slot [12] registered Dec 16 12:16:24.937666 kernel: acpiphp: Slot [13] registered Dec 16 12:16:24.937685 kernel: acpiphp: Slot [14] registered Dec 16 12:16:24.937704 kernel: acpiphp: Slot [15] registered Dec 16 12:16:24.937723 kernel: acpiphp: Slot [16] registered Dec 16 12:16:24.937742 kernel: acpiphp: Slot [17] registered Dec 16 12:16:24.937765 kernel: acpiphp: Slot [18] registered Dec 16 12:16:24.937784 kernel: acpiphp: Slot [19] registered Dec 16 12:16:24.937803 kernel: acpiphp: Slot [20] registered Dec 16 12:16:24.937822 kernel: acpiphp: Slot [21] registered Dec 16 12:16:24.937841 
kernel: acpiphp: Slot [22] registered Dec 16 12:16:24.937860 kernel: acpiphp: Slot [23] registered Dec 16 12:16:24.937879 kernel: acpiphp: Slot [24] registered Dec 16 12:16:24.937902 kernel: acpiphp: Slot [25] registered Dec 16 12:16:24.937921 kernel: acpiphp: Slot [26] registered Dec 16 12:16:24.937941 kernel: acpiphp: Slot [27] registered Dec 16 12:16:24.937960 kernel: acpiphp: Slot [28] registered Dec 16 12:16:24.937978 kernel: acpiphp: Slot [29] registered Dec 16 12:16:24.937997 kernel: acpiphp: Slot [30] registered Dec 16 12:16:24.938016 kernel: acpiphp: Slot [31] registered Dec 16 12:16:24.938035 kernel: PCI host bridge to bus 0000:00 Dec 16 12:16:24.938319 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 16 12:16:24.946241 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 12:16:24.946507 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 16 12:16:24.946738 kernel: pci_bus 0000:00: root bus resource [bus 00] Dec 16 12:16:24.947037 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:16:24.947361 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Dec 16 12:16:24.947629 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Dec 16 12:16:24.947938 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Dec 16 12:16:24.948344 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Dec 16 12:16:24.948637 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 12:16:24.948930 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Dec 16 12:16:24.949244 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Dec 16 12:16:24.949518 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Dec 16 12:16:24.949774 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Dec 16 12:16:24.950027 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 12:16:24.950291 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 16 12:16:24.950535 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 12:16:24.950767 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 16 12:16:24.950793 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 12:16:24.950813 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 12:16:24.950833 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 12:16:24.950852 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 12:16:24.950871 kernel: iommu: Default domain type: Translated Dec 16 12:16:24.950895 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:16:24.950915 kernel: efivars: Registered efivars operations Dec 16 12:16:24.950933 kernel: vgaarb: loaded Dec 16 12:16:24.950952 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:16:24.950971 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:16:24.950990 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:16:24.951010 kernel: pnp: PnP ACPI init Dec 16 12:16:24.951314 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 16 12:16:24.951343 kernel: pnp: PnP ACPI: found 1 devices Dec 16 12:16:24.951363 kernel: NET: Registered PF_INET protocol family Dec 16 12:16:24.951383 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:16:24.951402 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:16:24.951422 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:16:24.951441 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:16:24.951466 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:16:24.951485 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:16:24.951504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:16:24.951524 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:16:24.951543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:16:24.951562 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:16:24.951581 kernel: kvm [1]: HYP mode not available Dec 16 12:16:24.951605 kernel: Initialise system trusted keyrings Dec 16 12:16:24.951623 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:16:24.951642 kernel: Key type asymmetric registered Dec 16 12:16:24.951661 kernel: Asymmetric key parser 'x509' registered Dec 16 12:16:24.951680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:16:24.951699 kernel: io scheduler mq-deadline registered Dec 16 12:16:24.951717 kernel: io scheduler kyber registered Dec 16 12:16:24.951741 kernel: io scheduler bfq registered Dec 16 12:16:24.952061 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 16 12:16:24.952090 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 12:16:24.952113 kernel: ACPI: button: Power Button [PWRB] Dec 16 12:16:24.952134 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 16 12:16:24.952154 kernel: ACPI: button: Sleep Button [SLPB] Dec 16 12:16:24.952181 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:16:24.952226 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 16 12:16:24.952551 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 16 12:16:24.952581 kernel: printk: legacy console [ttyS0] disabled Dec 16 12:16:24.952601 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 16 12:16:24.952621 kernel: printk: legacy console [ttyS0] enabled Dec 16 12:16:24.952640 kernel: printk: legacy bootconsole [uart0] disabled Dec 16 12:16:24.952665 kernel: thunder_xcv, ver 1.0 Dec 16 12:16:24.952684 kernel: thunder_bgx, ver 1.0 Dec 16 12:16:24.952703 kernel: nicpf, ver 1.0 Dec 16 12:16:24.952722 kernel: nicvf, ver 1.0 Dec 16 12:16:24.953003 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:16:24.953317 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:16:21 UTC (1765887381) Dec 16 12:16:24.953349 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:16:24.953378 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Dec 16 12:16:24.953398 kernel: watchdog: NMI not fully supported Dec 16 12:16:24.953420 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:16:24.953439 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:16:24.953459 kernel: Segment Routing with IPv6 Dec 16 12:16:24.953478 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:16:24.953498 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:16:24.953523 kernel: Key type 
dns_resolver registered Dec 16 12:16:24.953543 kernel: registered taskstats version 1 Dec 16 12:16:24.953562 kernel: Loading compiled-in X.509 certificates Dec 16 12:16:24.953583 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 12:16:24.953602 kernel: Demotion targets for Node 0: null Dec 16 12:16:24.953621 kernel: Key type .fscrypt registered Dec 16 12:16:24.953641 kernel: Key type fscrypt-provisioning registered Dec 16 12:16:24.953665 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:16:24.953685 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:16:24.953704 kernel: ima: No architecture policies found Dec 16 12:16:24.953725 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:16:24.953744 kernel: clk: Disabling unused clocks Dec 16 12:16:24.953764 kernel: PM: genpd: Disabling unused power domains Dec 16 12:16:24.953784 kernel: Freeing unused kernel memory: 12480K Dec 16 12:16:24.953803 kernel: Run /init as init process Dec 16 12:16:24.953828 kernel: with arguments: Dec 16 12:16:24.953848 kernel: /init Dec 16 12:16:24.953866 kernel: with environment: Dec 16 12:16:24.953885 kernel: HOME=/ Dec 16 12:16:24.953904 kernel: TERM=linux Dec 16 12:16:24.953924 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 12:16:24.954264 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 12:16:24.957416 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 12:16:24.957461 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:16:24.957481 kernel: GPT:25804799 != 33554431 Dec 16 12:16:24.957500 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:16:24.957519 kernel: GPT:25804799 != 33554431 Dec 16 12:16:24.957538 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:16:24.957568 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 12:16:24.957587 kernel: SCSI subsystem initialized Dec 16 12:16:24.957607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:16:24.957626 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:16:24.957649 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:16:24.957669 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:16:24.957688 kernel: raid6: neonx8 gen() 6587 MB/s Dec 16 12:16:24.957713 kernel: raid6: neonx4 gen() 6587 MB/s Dec 16 12:16:24.957734 kernel: raid6: neonx2 gen() 5466 MB/s Dec 16 12:16:24.957753 kernel: raid6: neonx1 gen() 3969 MB/s Dec 16 12:16:24.957773 kernel: raid6: int64x8 gen() 3635 MB/s Dec 16 12:16:24.957793 kernel: raid6: int64x4 gen() 3701 MB/s Dec 16 12:16:24.957813 kernel: raid6: int64x2 gen() 3609 MB/s Dec 16 12:16:24.957833 kernel: raid6: int64x1 gen() 2733 MB/s Dec 16 12:16:24.957859 kernel: raid6: using algorithm neonx8 gen() 6587 MB/s Dec 16 12:16:24.957879 kernel: raid6: .... 
xor() 4724 MB/s, rmw enabled Dec 16 12:16:24.957898 kernel: raid6: using neon recovery algorithm Dec 16 12:16:24.957919 kernel: xor: measuring software checksum speed Dec 16 12:16:24.957939 kernel: 8regs : 12919 MB/sec Dec 16 12:16:24.957960 kernel: 32regs : 13013 MB/sec Dec 16 12:16:24.957979 kernel: arm64_neon : 8843 MB/sec Dec 16 12:16:24.958003 kernel: xor: using function: 32regs (13013 MB/sec) Dec 16 12:16:24.958023 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:16:24.958043 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (222) Dec 16 12:16:24.958063 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 12:16:24.958083 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:16:24.958103 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 12:16:24.958123 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:16:24.958146 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:16:24.958165 kernel: loop: module loaded Dec 16 12:16:24.958185 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 12:16:24.958258 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:16:24.958285 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:16:24.958312 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:16:24.958341 systemd[1]: Detected virtualization amazon. Dec 16 12:16:24.958361 systemd[1]: Detected architecture arm64. Dec 16 12:16:24.958381 systemd[1]: Running in initrd. Dec 16 12:16:24.958401 systemd[1]: No hostname configured, using default hostname. Dec 16 12:16:24.958422 systemd[1]: Hostname set to . Dec 16 12:16:24.958442 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:16:24.958463 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:16:24.958488 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:16:24.958509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:16:24.958530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:16:24.958552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:16:24.958573 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:16:24.958616 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:16:24.958638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:16:24.958659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:16:24.958680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:16:24.958701 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:16:24.958726 systemd[1]: Reached target paths.target - Path Units. 
Dec 16 12:16:24.958747 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:16:24.958769 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:16:24.958789 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:16:24.958810 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:16:24.958831 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:16:24.958853 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:16:24.958879 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:16:24.958906 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:16:24.958926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:16:24.958947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:16:24.958969 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:16:24.958989 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:16:24.959011 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:16:24.959036 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:16:24.959057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:16:24.959078 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:16:24.959101 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:16:24.959125 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:16:24.959146 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:16:24.959170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:16:24.959217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:16:24.959248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:16:24.959303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:16:24.959328 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:16:24.959350 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:16:24.959427 systemd-journald[361]: Collecting audit messages is enabled. Dec 16 12:16:24.959477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:16:24.959499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:16:24.959520 kernel: audit: type=1130 audit(1765887384.937:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:24.959542 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:16:24.959568 systemd-journald[361]: Journal started Dec 16 12:16:24.959606 systemd-journald[361]: Runtime Journal (/run/log/journal/ec2d33883081e5b6c080b075b2fcc1dc) is 8M, max 75.3M, 67.3M free. 
Dec 16 12:16:24.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:24.969474 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:16:24.969558 kernel: audit: type=1130 audit(1765887384.961:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:24.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:24.983881 kernel: Bridge firewalling registered Dec 16 12:16:24.978466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:16:24.979302 systemd-modules-load[362]: Inserted module 'br_netfilter' Dec 16 12:16:24.989449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:16:24.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.011865 kernel: audit: type=1130 audit(1765887384.998:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.012004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:16:25.026065 kernel: audit: type=1130 audit(1765887385.010:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.021906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:16:25.041664 systemd-tmpfiles[377]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:16:25.050045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:16:25.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.067453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:16:25.077372 kernel: audit: type=1130 audit(1765887385.055:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.077415 kernel: audit: type=1130 audit(1765887385.066:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:16:25.081631 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:16:25.089440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:16:25.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.100000 audit: BPF prog-id=6 op=LOAD Dec 16 12:16:25.103167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:16:25.107027 kernel: audit: type=1130 audit(1765887385.094:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.107080 kernel: audit: type=1334 audit(1765887385.100:9): prog-id=6 op=LOAD Dec 16 12:16:25.146260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:16:25.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.154412 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:16:25.170255 kernel: audit: type=1130 audit(1765887385.149:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.197912 dracut-cmdline[400]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:16:25.261911 systemd-resolved[388]: Positive Trust Anchors: Dec 16 12:16:25.265254 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:16:25.265265 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:16:25.265328 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:16:25.468270 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:16:25.516263 kernel: iscsi: registered transport (tcp) Dec 16 12:16:25.549230 kernel: random: crng init done Dec 16 12:16:25.560401 systemd-resolved[388]: Defaulting to hostname 'linux'. Dec 16 12:16:25.562371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 12:16:25.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.569542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:16:25.578747 kernel: audit: type=1130 audit(1765887385.568:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.590252 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:16:25.590334 kernel: QLogic iSCSI HBA Driver Dec 16 12:16:25.629995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:16:25.676338 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:16:25.691031 kernel: audit: type=1130 audit(1765887385.675:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.677238 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:16:25.765481 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:16:25.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.772715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:16:25.774628 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:16:25.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.848322 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:16:25.854000 audit: BPF prog-id=7 op=LOAD Dec 16 12:16:25.854000 audit: BPF prog-id=8 op=LOAD Dec 16 12:16:25.857255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:16:25.924859 systemd-udevd[641]: Using default interface naming scheme 'v257'. Dec 16 12:16:25.946305 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:16:25.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:25.956793 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:16:25.997447 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:16:25.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:26.002000 audit: BPF prog-id=9 op=LOAD Dec 16 12:16:26.007707 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:16:26.016471 dracut-pre-trigger[718]: rd.md=0: removing MD RAID activation Dec 16 12:16:26.067283 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:16:26.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:26.076397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:16:26.113398 systemd-networkd[746]: lo: Link UP Dec 16 12:16:26.113411 systemd-networkd[746]: lo: Gained carrier Dec 16 12:16:26.118035 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:16:26.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:26.126405 systemd[1]: Reached target network.target - Network. Dec 16 12:16:26.242258 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:16:26.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:26.251251 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:16:26.446681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:16:26.448047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:16:26.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:26.456112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:16:26.463252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:16:26.486012 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 12:16:26.486100 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 16 12:16:26.498114 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 12:16:26.498539 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 12:16:26.511231 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:57:7f:0b:11:b5 Dec 16 12:16:26.516446 (udev-worker)[779]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:16:26.533410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:16:26.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:26.545643 systemd-networkd[746]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:16:26.545664 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:16:26.558428 systemd-networkd[746]: eth0: Link UP Dec 16 12:16:26.561366 kernel: nvme nvme0: using unchecked data buffer Dec 16 12:16:26.558736 systemd-networkd[746]: eth0: Gained carrier Dec 16 12:16:26.558759 systemd-networkd[746]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:16:26.577309 systemd-networkd[746]: eth0: DHCPv4 address 172.31.20.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 12:16:26.701979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 12:16:26.730439 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:16:26.768315 disk-uuid[866]: Primary Header is updated. Dec 16 12:16:26.768315 disk-uuid[866]: Secondary Entries is updated. Dec 16 12:16:26.768315 disk-uuid[866]: Secondary Header is updated. Dec 16 12:16:26.828977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 12:16:26.873833 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 12:16:26.941876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 12:16:27.264046 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:16:27.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:27.269706 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:16:27.272599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:16:27.275310 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:16:27.279228 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:16:27.322297 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:16:27.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:27.870592 disk-uuid[873]: Warning: The kernel is still using the old partition table. Dec 16 12:16:27.870592 disk-uuid[873]: The new table will be used at the next reboot or after you Dec 16 12:16:27.870592 disk-uuid[873]: run partprobe(8) or kpartx(8) Dec 16 12:16:27.870592 disk-uuid[873]: The operation has completed successfully. Dec 16 12:16:27.884350 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:16:27.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:27.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:27.884557 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:16:27.892451 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 12:16:27.948256 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1091) Dec 16 12:16:27.952477 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:16:27.952527 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:16:27.998190 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:16:27.998264 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:16:28.008241 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:16:28.011339 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:16:28.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:28.017725 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:16:28.522378 systemd-networkd[746]: eth0: Gained IPv6LL Dec 16 12:16:29.255546 ignition[1110]: Ignition 2.24.0 Dec 16 12:16:29.255574 ignition[1110]: Stage: fetch-offline Dec 16 12:16:29.255998 ignition[1110]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:29.256027 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:29.265283 ignition[1110]: Ignition finished successfully Dec 16 12:16:29.272117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:16:29.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:29.280877 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:16:29.338456 ignition[1117]: Ignition 2.24.0 Dec 16 12:16:29.338972 ignition[1117]: Stage: fetch Dec 16 12:16:29.339413 ignition[1117]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:29.339436 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:29.339568 ignition[1117]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:29.362788 ignition[1117]: PUT result: OK Dec 16 12:16:29.366002 ignition[1117]: parsed url from cmdline: "" Dec 16 12:16:29.366139 ignition[1117]: no config URL provided Dec 16 12:16:29.366366 ignition[1117]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:16:29.366401 ignition[1117]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:16:29.366436 ignition[1117]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:29.375420 ignition[1117]: PUT result: OK Dec 16 12:16:29.376026 ignition[1117]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 12:16:29.379845 ignition[1117]: GET result: OK Dec 16 12:16:29.381480 ignition[1117]: parsing config with SHA512: 743ebacb902f58df7d32d016c15db16016e2071bdcc90e0881bcdae2af51fcf3af89382f582d2ab6c5023f5f55e94b6447421a259bb5a66672536e294de34bc2 Dec 16 12:16:29.394941 unknown[1117]: fetched base config from "system" Dec 16 12:16:29.394963 unknown[1117]: fetched base config from "system" Dec 16 12:16:29.395850 ignition[1117]: fetch: fetch complete Dec 16 12:16:29.394977 unknown[1117]: fetched user config from "aws" Dec 16 12:16:29.395861 ignition[1117]: fetch: fetch passed Dec 16 12:16:29.395960 ignition[1117]: Ignition finished successfully Dec 16 12:16:29.409607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:16:29.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:29.415997 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:16:29.463373 ignition[1123]: Ignition 2.24.0 Dec 16 12:16:29.463868 ignition[1123]: Stage: kargs Dec 16 12:16:29.464228 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:29.464277 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:29.464414 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:29.468896 ignition[1123]: PUT result: OK Dec 16 12:16:29.480682 ignition[1123]: kargs: kargs passed Dec 16 12:16:29.480814 ignition[1123]: Ignition finished successfully Dec 16 12:16:29.486361 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:16:29.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:29.492409 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 12:16:29.533834 ignition[1129]: Ignition 2.24.0 Dec 16 12:16:29.533867 ignition[1129]: Stage: disks Dec 16 12:16:29.534263 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:29.534286 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:29.534875 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:29.541782 ignition[1129]: PUT result: OK Dec 16 12:16:29.549274 ignition[1129]: disks: disks passed Dec 16 12:16:29.549411 ignition[1129]: Ignition finished successfully Dec 16 12:16:29.555498 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:16:29.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:29.562746 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:16:29.567847 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:16:29.571100 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:16:29.578799 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:16:29.583591 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:16:29.590441 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:16:29.710304 systemd-fsck[1137]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 12:16:29.715194 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:16:29.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:29.723181 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:16:29.966239 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 12:16:29.967664 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:16:29.972690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:16:30.033797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:16:30.041118 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:16:30.046928 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:16:30.047019 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:16:30.047071 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:16:30.078763 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1156) Dec 16 12:16:30.084262 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:16:30.084332 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:16:30.085445 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:16:30.087788 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 12:16:30.105124 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:16:30.105194 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:16:30.108288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:16:32.528852 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:16:32.540358 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 16 12:16:32.540400 kernel: audit: type=1130 audit(1765887392.530:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:32.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:32.541801 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:16:32.548623 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:16:32.578589 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:16:32.582949 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:16:32.623353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:16:32.629557 ignition[1253]: INFO : Ignition 2.24.0 Dec 16 12:16:32.629557 ignition[1253]: INFO : Stage: mount Dec 16 12:16:32.629557 ignition[1253]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:32.629557 ignition[1253]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:32.629557 ignition[1253]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:32.649576 kernel: audit: type=1130 audit(1765887392.629:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:32.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:32.649715 ignition[1253]: INFO : PUT result: OK Dec 16 12:16:32.653340 ignition[1253]: INFO : mount: mount passed Dec 16 12:16:32.653340 ignition[1253]: INFO : Ignition finished successfully Dec 16 12:16:32.655829 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:16:32.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:32.670541 kernel: audit: type=1130 audit(1765887392.661:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:32.668253 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:16:32.699582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 16 12:16:32.736257 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1264) Dec 16 12:16:32.741331 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:16:32.741401 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:16:32.748881 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:16:32.748960 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:16:32.752539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:16:32.812922 ignition[1281]: INFO : Ignition 2.24.0 Dec 16 12:16:32.812922 ignition[1281]: INFO : Stage: files Dec 16 12:16:32.816933 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:32.816933 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:32.816933 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:32.825567 ignition[1281]: INFO : PUT result: OK Dec 16 12:16:32.833428 ignition[1281]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:16:32.836985 ignition[1281]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:16:32.836985 ignition[1281]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:16:32.848990 ignition[1281]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:16:32.852478 ignition[1281]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:16:32.855496 ignition[1281]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:16:32.853973 unknown[1281]: wrote ssh authorized keys file for user: core Dec 16 12:16:32.863059 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:16:32.867552 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 16 12:16:32.975351 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:16:33.131282 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:16:33.131282 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 
12:16:33.141056 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:16:33.168703 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:16:33.168703 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:16:33.168703 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:16:33.185084 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:16:33.190783 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:16:33.195890 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 12:16:33.689148 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:16:34.102631 ignition[1281]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:16:34.102631 ignition[1281]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:16:34.111717 ignition[1281]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:16:34.122080 ignition[1281]: INFO : files: files passed Dec 16 12:16:34.122080 ignition[1281]: INFO : Ignition finished successfully Dec 16 12:16:34.168367 kernel: audit: type=1130 audit(1765887394.137:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.133378 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:16:34.147535 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Dec 16 12:16:34.156425 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:16:34.186976 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:16:34.190374 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:16:34.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.206365 kernel: audit: type=1130 audit(1765887394.194:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.206411 kernel: audit: type=1131 audit(1765887394.194:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.213160 initrd-setup-root-after-ignition[1313]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:16:34.213160 initrd-setup-root-after-ignition[1313]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:16:34.223840 initrd-setup-root-after-ignition[1317]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:16:34.230309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:16:34.249142 kernel: audit: type=1130 audit(1765887394.232:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.238738 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:16:34.246229 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:16:34.337592 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:16:34.339254 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:16:34.356548 kernel: audit: type=1130 audit(1765887394.341:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.356600 kernel: audit: type=1131 audit(1765887394.341:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:16:34.348880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:16:34.356649 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:16:34.361600 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:16:34.369052 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:16:34.427346 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:16:34.443962 kernel: audit: type=1130 audit(1765887394.429:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.433439 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:16:34.477878 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:16:34.478384 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:16:34.484100 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:16:34.491717 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:16:34.494116 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:16:34.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.494701 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:16:34.501454 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:16:34.509166 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:16:34.511618 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:16:34.516939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:16:34.521588 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:16:34.524901 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:16:34.532473 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:16:34.539662 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:16:34.542798 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:16:34.550248 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:16:34.553223 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:16:34.559131 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:16:34.559593 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:16:34.567179 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:16:34.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.572356 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 12:16:34.573019 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:16:34.575345 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:16:34.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.580732 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:16:34.580956 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:16:34.589823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:16:34.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.590180 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:16:34.597230 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:16:34.597444 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:16:34.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.609247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:16:34.615149 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:16:34.615905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:16:34.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.640428 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:16:34.650353 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:16:34.654906 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:16:34.662457 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:16:34.662738 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:16:34.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.673623 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:16:34.676076 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:16:34.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.692082 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:16:34.696306 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 16 12:16:34.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.707502 ignition[1337]: INFO : Ignition 2.24.0 Dec 16 12:16:34.710035 ignition[1337]: INFO : Stage: umount Dec 16 12:16:34.713243 ignition[1337]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:16:34.713243 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:16:34.713243 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:16:34.721268 ignition[1337]: INFO : PUT result: OK Dec 16 12:16:34.728922 ignition[1337]: INFO : umount: umount passed Dec 16 12:16:34.733420 ignition[1337]: INFO : Ignition finished successfully Dec 16 12:16:34.734577 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:16:34.734829 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:16:34.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.746983 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:16:34.747986 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:16:34.748079 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:16:34.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.761558 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:16:34.762818 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:16:34.765413 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:16:34.765511 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:16:34.766139 systemd[1]: Stopped target network.target - Network. Dec 16 12:16:34.766767 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:16:34.766846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:16:34.767183 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:16:34.773017 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:16:34.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.796909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:16:34.799984 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 16 12:16:34.803269 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:16:34.815469 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:16:34.815685 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:16:34.821980 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:16:34.822060 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:16:34.826354 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:16:34.826409 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:16:34.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.831915 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:16:34.832034 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:16:34.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.840076 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:16:34.840172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:16:34.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.849846 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:16:34.856719 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:16:34.865657 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:16:34.865848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:16:34.884537 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:16:34.884728 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:16:34.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.920000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:16:34.921910 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:16:34.922143 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:16:34.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.935535 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:16:34.945000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:16:34.946866 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:16:34.946984 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:16:34.952501 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:16:34.952614 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:16:34.959699 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Dec 16 12:16:34.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.969356 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:16:34.969506 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:16:34.974746 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:16:34.974975 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:16:34.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.984860 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:16:34.984969 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:16:34.988019 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:16:34.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:34.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.015718 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:16:35.016412 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:16:35.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.023121 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:16:35.023238 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:16:35.026501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:16:35.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.026574 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:16:35.034620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:16:35.034716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:16:35.041867 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:16:35.044124 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:16:35.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.054624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:16:35.054819 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:16:35.064334 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Dec 16 12:16:35.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.070017 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:16:35.070326 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:16:35.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.078824 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:16:35.078943 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:16:35.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.082402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:16:35.082495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:16:35.113962 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:16:35.114475 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:16:35.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.127768 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:16:35.129528 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:16:35.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:35.135586 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:16:35.141842 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:16:35.171048 systemd[1]: Switching root. Dec 16 12:16:35.249644 systemd-journald[361]: Journal stopped Dec 16 12:16:39.169647 systemd-journald[361]: Received SIGTERM from PID 1 (systemd). 
Dec 16 12:16:39.169769 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:16:39.169821 kernel: SELinux: policy capability open_perms=1 Dec 16 12:16:39.169854 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:16:39.169893 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:16:39.169926 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:16:39.169964 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:16:39.169995 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:16:39.170028 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:16:39.170057 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:16:39.170090 systemd[1]: Successfully loaded SELinux policy in 165.619ms. Dec 16 12:16:39.170140 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.925ms. Dec 16 12:16:39.170176 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:16:39.170237 systemd[1]: Detected virtualization amazon. Dec 16 12:16:39.170275 systemd[1]: Detected architecture arm64. Dec 16 12:16:39.170307 systemd[1]: Detected first boot. Dec 16 12:16:39.170340 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:16:39.170370 zram_generator::config[1383]: No configuration found. Dec 16 12:16:39.170411 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:16:39.170444 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:16:39.170482 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 16 12:16:39.170515 kernel: audit: type=1334 audit(1765887398.277:88): prog-id=12 op=LOAD Dec 16 12:16:39.170547 kernel: audit: type=1334 audit(1765887398.277:89): prog-id=3 op=UNLOAD Dec 16 12:16:39.170577 kernel: audit: type=1334 audit(1765887398.278:90): prog-id=13 op=LOAD Dec 16 12:16:39.170608 kernel: audit: type=1334 audit(1765887398.279:91): prog-id=14 op=LOAD Dec 16 12:16:39.170636 kernel: audit: type=1334 audit(1765887398.279:92): prog-id=4 op=UNLOAD Dec 16 12:16:39.170663 kernel: audit: type=1334 audit(1765887398.279:93): prog-id=5 op=UNLOAD Dec 16 12:16:39.170699 kernel: audit: type=1131 audit(1765887398.282:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.170737 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:16:39.170769 kernel: audit: type=1334 audit(1765887398.293:95): prog-id=12 op=UNLOAD Dec 16 12:16:39.170798 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:16:39.170830 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:16:39.170861 kernel: audit: type=1130 audit(1765887398.300:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.170895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Dec 16 12:16:39.170935 kernel: audit: type=1131 audit(1765887398.300:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.170966 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:16:39.170995 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:16:39.171027 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:16:39.171059 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:16:39.171090 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:16:39.171124 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:16:39.171156 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:16:39.171185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:16:39.172585 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:16:39.172628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:16:39.172659 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:16:39.172698 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:16:39.172739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:16:39.172769 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:16:39.172800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:16:39.172831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:16:39.172862 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:16:39.172894 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:16:39.172928 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:16:39.172959 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:16:39.172988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:16:39.173017 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:16:39.173067 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:16:39.173100 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:16:39.173132 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:16:39.173166 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:16:39.174226 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:16:39.174295 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:16:39.174326 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:16:39.174359 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:16:39.174391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 12:16:39.174432 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:16:39.174468 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:16:39.174499 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:16:39.174531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:16:39.174561 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:16:39.174594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:16:39.174623 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:16:39.174653 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:16:39.174686 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:16:39.174715 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:16:39.174747 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:16:39.174777 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:16:39.174808 systemd[1]: Reached target machines.target - Containers. Dec 16 12:16:39.174837 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:16:39.174866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:16:39.174899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:16:39.174931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:16:39.174959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:16:39.174991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:16:39.175022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:16:39.175051 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:16:39.175084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:16:39.175119 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:16:39.175151 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:16:39.175179 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:16:39.185266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:16:39.185327 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:16:39.185361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:16:39.185400 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:16:39.185430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:16:39.185460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:16:39.185491 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 16 12:16:39.185525 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:16:39.185555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:16:39.185621 kernel: fuse: init (API version 7.41) Dec 16 12:16:39.185653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:16:39.185683 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:16:39.185713 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:16:39.185742 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:16:39.185777 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:16:39.185809 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:16:39.185838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:16:39.185867 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:16:39.185898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:16:39.185927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:16:39.185959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:16:39.185993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:16:39.186023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:16:39.186053 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:16:39.186082 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:16:39.186116 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:16:39.186148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:16:39.186181 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:16:39.186234 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:16:39.186273 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:16:39.186303 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:16:39.186334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:16:39.186366 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:16:39.186395 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:16:39.186425 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:16:39.186458 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:16:39.186492 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:16:39.186522 kernel: ACPI: bus type drm_connector registered Dec 16 12:16:39.186551 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:16:39.186583 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:16:39.186613 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:16:39.186646 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Dec 16 12:16:39.186677 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:16:39.186714 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:16:39.186747 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:16:39.186779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:16:39.186811 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:16:39.186898 systemd-journald[1461]: Collecting audit messages is enabled. Dec 16 12:16:39.186956 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:16:39.186994 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:16:39.187027 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:16:39.187058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:16:39.187089 systemd-journald[1461]: Journal started Dec 16 12:16:39.187138 systemd-journald[1461]: Runtime Journal (/run/log/journal/ec2d33883081e5b6c080b075b2fcc1dc) is 8M, max 75.3M, 67.3M free. Dec 16 12:16:38.457000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:16:38.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.797000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:16:38.797000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:16:38.798000 audit: BPF prog-id=15 op=LOAD Dec 16 12:16:38.799000 audit: BPF prog-id=16 op=LOAD Dec 16 12:16:38.799000 audit: BPF prog-id=17 op=LOAD Dec 16 12:16:38.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:38.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:39.159000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:16:39.159000 audit[1461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffce92de00 a2=4000 a3=0 items=0 ppid=1 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:39.159000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:16:38.265543 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:16:38.282361 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 12:16:39.200531 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:16:39.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:38.283285 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:16:39.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.204855 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:16:39.246481 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:16:39.270928 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:16:39.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.274141 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:16:39.283568 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:16:39.310583 kernel: loop1: detected capacity change from 0 to 61504 Dec 16 12:16:39.315645 systemd-journald[1461]: Time spent on flushing to /var/log/journal/ec2d33883081e5b6c080b075b2fcc1dc is 72.153ms for 1058 entries. Dec 16 12:16:39.315645 systemd-journald[1461]: System Journal (/var/log/journal/ec2d33883081e5b6c080b075b2fcc1dc) is 8M, max 588.1M, 580.1M free. Dec 16 12:16:39.420468 systemd-journald[1461]: Received client request to flush runtime journal. Dec 16 12:16:39.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.348127 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Dec 16 12:16:39.356728 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:16:39.360485 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:16:39.365346 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:16:39.411325 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:16:39.425295 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:16:39.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.489373 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:16:39.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.494000 audit: BPF prog-id=18 op=LOAD Dec 16 12:16:39.494000 audit: BPF prog-id=19 op=LOAD Dec 16 12:16:39.494000 audit: BPF prog-id=20 op=LOAD Dec 16 12:16:39.499484 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:16:39.503000 audit: BPF prog-id=21 op=LOAD Dec 16 12:16:39.508718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:16:39.516484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:16:39.526000 audit: BPF prog-id=22 op=LOAD Dec 16 12:16:39.528000 audit: BPF prog-id=23 op=LOAD Dec 16 12:16:39.528000 audit: BPF prog-id=24 op=LOAD Dec 16 12:16:39.535476 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:16:39.542000 audit: BPF prog-id=25 op=LOAD Dec 16 12:16:39.542000 audit: BPF prog-id=26 op=LOAD Dec 16 12:16:39.542000 audit: BPF prog-id=27 op=LOAD Dec 16 12:16:39.546666 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:16:39.630238 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Dec 16 12:16:39.630280 systemd-tmpfiles[1535]: ACLs are not supported, ignoring. Dec 16 12:16:39.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.649101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:16:39.674276 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:16:39.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.712424 systemd-nsresourced[1537]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:16:39.716234 kernel: loop2: detected capacity change from 0 to 100192 Dec 16 12:16:39.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:39.719707 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:16:39.851292 systemd-oomd[1533]: No swap; memory pressure usage will be degraded Dec 16 12:16:39.852537 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:16:39.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.903623 systemd-resolved[1534]: Positive Trust Anchors: Dec 16 12:16:39.903661 systemd-resolved[1534]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:16:39.903671 systemd-resolved[1534]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:16:39.903731 systemd-resolved[1534]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:16:39.918438 systemd-resolved[1534]: Defaulting to hostname 'linux'. Dec 16 12:16:39.920790 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:16:39.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:39.923611 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:16:40.041239 kernel: loop3: detected capacity change from 0 to 45344 Dec 16 12:16:40.319253 kernel: loop4: detected capacity change from 0 to 207008 Dec 16 12:16:40.389302 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:16:40.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:40.391000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:16:40.391000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:16:40.392000 audit: BPF prog-id=28 op=LOAD Dec 16 12:16:40.392000 audit: BPF prog-id=29 op=LOAD Dec 16 12:16:40.395294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:16:40.456981 systemd-udevd[1558]: Using default interface naming scheme 'v257'. Dec 16 12:16:40.496108 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:16:40.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:40.501000 audit: BPF prog-id=30 op=LOAD Dec 16 12:16:40.504723 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:16:40.615609 (udev-worker)[1571]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:16:40.634351 kernel: loop5: detected capacity change from 0 to 61504 Dec 16 12:16:40.658265 kernel: loop6: detected capacity change from 0 to 100192 Dec 16 12:16:40.658358 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 12:16:40.688417 kernel: loop7: detected capacity change from 0 to 45344 Dec 16 12:16:40.713292 kernel: loop1: detected capacity change from 0 to 207008 Dec 16 12:16:40.718897 systemd-networkd[1564]: lo: Link UP Dec 16 12:16:40.719042 systemd-networkd[1564]: lo: Gained carrier Dec 16 12:16:40.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:40.722019 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:16:40.724762 systemd[1]: Reached target network.target - Network. Dec 16 12:16:40.730456 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:16:40.740479 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:16:40.748658 (sd-merge)[1586]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 16 12:16:40.755769 (sd-merge)[1586]: Merged extensions into '/usr'. Dec 16 12:16:40.769960 systemd[1]: Reload requested from client PID 1496 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:16:40.770667 systemd[1]: Reloading... Dec 16 12:16:40.772696 systemd-networkd[1564]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:16:40.772724 systemd-networkd[1564]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:16:40.788694 systemd-networkd[1564]: eth0: Link UP Dec 16 12:16:40.788990 systemd-networkd[1564]: eth0: Gained carrier Dec 16 12:16:40.789045 systemd-networkd[1564]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:16:40.806364 systemd-networkd[1564]: eth0: DHCPv4 address 172.31.20.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 12:16:41.029255 zram_generator::config[1644]: No configuration found. Dec 16 12:16:41.645383 systemd[1]: Reloading finished in 873 ms. Dec 16 12:16:41.683840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:16:41.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:41.687712 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:16:41.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:41.751755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 12:16:41.774384 systemd[1]: Starting ensure-sysext.service... Dec 16 12:16:41.790471 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 16 12:16:41.798442 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:16:41.810680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:16:41.830000 audit: BPF prog-id=31 op=LOAD Dec 16 12:16:41.830000 audit: BPF prog-id=32 op=LOAD Dec 16 12:16:41.830000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:16:41.830000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:16:41.832000 audit: BPF prog-id=33 op=LOAD Dec 16 12:16:41.832000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:16:41.837000 audit: BPF prog-id=34 op=LOAD Dec 16 12:16:41.837000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:16:41.837000 audit: BPF prog-id=35 op=LOAD Dec 16 12:16:41.837000 audit: BPF prog-id=36 op=LOAD Dec 16 12:16:41.837000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:16:41.837000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:16:41.838000 audit: BPF prog-id=37 op=LOAD Dec 16 12:16:41.838000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:16:41.839000 audit: BPF prog-id=38 op=LOAD Dec 16 12:16:41.839000 audit: BPF prog-id=39 op=LOAD Dec 16 12:16:41.839000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:16:41.839000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:16:41.843000 audit: BPF prog-id=40 op=LOAD Dec 16 12:16:41.843000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:16:41.843000 audit: BPF prog-id=41 op=LOAD Dec 16 12:16:41.843000 audit: BPF prog-id=42 op=LOAD Dec 16 12:16:41.843000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:16:41.843000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:16:41.848000 audit: BPF prog-id=43 op=LOAD Dec 16 12:16:41.848000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:16:41.849000 audit: BPF prog-id=44 op=LOAD Dec 16 12:16:41.849000 audit: BPF prog-id=45 op=LOAD Dec 16 12:16:41.849000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:16:41.849000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:16:41.850000 audit: BPF prog-id=46 op=LOAD Dec 16 12:16:41.850000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:16:41.855222 systemd-tmpfiles[1779]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:16:41.855293 systemd-tmpfiles[1779]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:16:41.856918 systemd-tmpfiles[1779]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:16:41.861859 systemd-tmpfiles[1779]: ACLs are not supported, ignoring. Dec 16 12:16:41.862002 systemd-tmpfiles[1779]: ACLs are not supported, ignoring. Dec 16 12:16:41.864359 systemd[1]: Reload requested from client PID 1777 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:16:41.864392 systemd[1]: Reloading... Dec 16 12:16:41.900796 systemd-tmpfiles[1779]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:16:41.901086 systemd-tmpfiles[1779]: Skipping /boot Dec 16 12:16:41.929755 systemd-tmpfiles[1779]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:16:41.929919 systemd-tmpfiles[1779]: Skipping /boot Dec 16 12:16:42.031245 zram_generator::config[1822]: No configuration found. Dec 16 12:16:42.154401 systemd-networkd[1564]: eth0: Gained IPv6LL Dec 16 12:16:42.482801 systemd[1]: Reloading finished in 617 ms. Dec 16 12:16:42.515710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 16 12:16:42.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.521000 audit: BPF prog-id=47 op=LOAD Dec 16 12:16:42.521000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:16:42.522000 audit: BPF prog-id=48 op=LOAD Dec 16 12:16:42.522000 audit: BPF prog-id=49 op=LOAD Dec 16 12:16:42.522000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:16:42.522000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:16:42.523000 audit: BPF prog-id=50 op=LOAD Dec 16 12:16:42.523000 audit: BPF prog-id=51 op=LOAD Dec 16 12:16:42.524000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:16:42.524000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:16:42.528000 audit: BPF prog-id=52 op=LOAD Dec 16 12:16:42.533000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:16:42.533000 audit: BPF prog-id=53 op=LOAD Dec 16 12:16:42.533000 audit: BPF prog-id=54 op=LOAD Dec 16 12:16:42.533000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:16:42.533000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:16:42.534000 audit: BPF prog-id=55 op=LOAD Dec 16 12:16:42.534000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:16:42.534000 audit: BPF prog-id=56 op=LOAD Dec 16 12:16:42.535000 audit: BPF prog-id=57 op=LOAD Dec 16 12:16:42.535000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:16:42.535000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:16:42.537000 audit: BPF prog-id=58 op=LOAD Dec 16 12:16:42.538000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:16:42.538000 audit: BPF prog-id=59 op=LOAD Dec 16 12:16:42.538000 audit: BPF prog-id=60 op=LOAD Dec 16 12:16:42.538000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:16:42.539000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:16:42.540000 audit: BPF prog-id=61 op=LOAD Dec 16 12:16:42.540000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:16:42.541000 audit: BPF prog-id=62 op=LOAD Dec 16 12:16:42.542000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:16:42.548177 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:16:42.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.552315 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:16:42.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.558576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:16:42.577482 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:16:42.582756 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:16:42.595381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:16:42.604549 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 16 12:16:42.619792 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:16:42.636358 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:16:42.648243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:16:42.652007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:16:42.658002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:16:42.664828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:16:42.667464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:16:42.667829 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:16:42.668020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:16:42.676524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:16:42.677775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:16:42.678103 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:16:42.678638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:16:42.687000 audit[1885]: SYSTEM_BOOT pid=1885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.689058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:16:42.698106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:16:42.700719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:16:42.701121 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:16:42.702422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:16:42.702727 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:16:42.715311 systemd[1]: Finished ensure-sysext.service. Dec 16 12:16:42.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.729338 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 16 12:16:42.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.758441 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:16:42.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.798122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:16:42.803980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:16:42.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.809047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:16:42.810309 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:16:42.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.814521 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:16:42.815561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:16:42.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.818900 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:16:42.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:42.819401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:16:42.827151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 12:16:42.827637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:16:42.899000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:16:42.899000 audit[1910]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc6c155d0 a2=420 a3=0 items=0 ppid=1875 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.899000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:16:42.901068 augenrules[1910]: No rules Dec 16 12:16:42.903548 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:16:42.905299 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:16:43.064828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:16:43.069251 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:16:45.233223 ldconfig[1878]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:16:45.245332 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:16:45.250785 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:16:45.280308 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:16:45.283450 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:16:45.286263 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:16:45.289152 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:16:45.292274 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:16:45.294846 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:16:45.298157 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:16:45.301425 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:16:45.303876 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:16:45.307075 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:16:45.307265 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:16:45.309472 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:16:45.312987 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:16:45.318079 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:16:45.325328 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:16:45.328734 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:16:45.331872 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Dec 16 12:16:45.338085 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:16:45.341150 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:16:45.345137 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:16:45.347917 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:16:45.350921 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:16:45.353258 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:16:45.353421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:16:45.357387 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:16:45.363710 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:16:45.371289 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:16:45.382700 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:16:45.390445 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:16:45.396809 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:16:45.399414 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:16:45.422591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:16:45.428300 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:16:45.433594 jq[1926]: false Dec 16 12:16:45.435645 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 12:16:45.441138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:16:45.450014 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:16:45.465498 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 12:16:45.475620 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:16:45.488577 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:16:45.501251 extend-filesystems[1927]: Found /dev/nvme0n1p6 Dec 16 12:16:45.503761 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:16:45.507573 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:16:45.513079 extend-filesystems[1927]: Found /dev/nvme0n1p9 Dec 16 12:16:45.521108 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:16:45.524160 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:16:45.532395 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:16:45.541485 extend-filesystems[1927]: Checking size of /dev/nvme0n1p9 Dec 16 12:16:45.546750 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:16:45.550468 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:16:45.551896 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 16 12:16:45.578990 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:16:45.591914 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:16:45.621018 jq[1943]: true Dec 16 12:16:45.635496 extend-filesystems[1927]: Resized partition /dev/nvme0n1p9 Dec 16 12:16:45.651995 tar[1948]: linux-arm64/LICENSE Dec 16 12:16:45.651995 tar[1948]: linux-arm64/helm Dec 16 12:16:45.655772 extend-filesystems[1969]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:16:45.716254 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 12:16:45.741406 update_engine[1942]: I20251216 12:16:45.728986 1942 main.cc:92] Flatcar Update Engine starting Dec 16 12:16:45.747658 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 16 12:16:45.755689 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:16:45.766566 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:16:45.779450 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 12:16:45.779450 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 12:16:45.779450 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Dec 16 12:16:45.793419 extend-filesystems[1927]: Resized filesystem in /dev/nvme0n1p9 Dec 16 12:16:45.799070 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:16:45.799613 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:39:58 UTC 2025 (1): Starting Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: ---------------------------------------------------- Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: corporation. Support and training for ntp-4 are Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: available at https://www.nwtime.org/support Dec 16 12:16:45.812288 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: ---------------------------------------------------- Dec 16 12:16:45.808575 ntpd[1930]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:39:58 UTC 2025 (1): Starting Dec 16 12:16:45.808671 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:16:45.808690 ntpd[1930]: ---------------------------------------------------- Dec 16 12:16:45.808708 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:16:45.808725 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:16:45.808742 ntpd[1930]: corporation. 
Support and training for ntp-4 are Dec 16 12:16:45.808760 ntpd[1930]: available at https://www.nwtime.org/support Dec 16 12:16:45.808776 ntpd[1930]: ---------------------------------------------------- Dec 16 12:16:45.834378 jq[1968]: true Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: proto: precision = 0.096 usec (-23) Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: basedate set to 2025-12-03 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: gps base set to 2025-12-07 (week 2396) Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listen normally on 3 eth0 172.31.20.6:123 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listen normally on 4 lo [::1]:123 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listen normally on 5 eth0 [fe80::457:7fff:fe0b:11b5%2]:123 Dec 16 12:16:45.834765 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: Listening on routing socket on fd #22 for interface updates Dec 16 12:16:45.816076 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:16:45.821915 dbus-daemon[1924]: [system] SELinux support is enabled Dec 16 12:16:45.822678 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:16:45.827446 ntpd[1930]: proto: precision = 0.096 usec (-23) Dec 16 12:16:45.831578 ntpd[1930]: basedate set to 2025-12-03 Dec 16 12:16:45.831607 ntpd[1930]: gps base set to 2025-12-07 (week 2396) Dec 16 12:16:45.831788 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:16:45.831833 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:16:45.834516 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:16:45.840028 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:16:45.834569 ntpd[1930]: Listen normally on 3 eth0 172.31.20.6:123 Dec 16 12:16:45.840077 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:16:45.834618 ntpd[1930]: Listen normally on 4 lo [::1]:123 Dec 16 12:16:45.843382 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:16:45.834663 ntpd[1930]: Listen normally on 5 eth0 [fe80::457:7fff:fe0b:11b5%2]:123 Dec 16 12:16:45.843419 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 16 12:16:45.834707 ntpd[1930]: Listening on routing socket on fd #22 for interface updates Dec 16 12:16:45.867473 dbus-daemon[1924]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1564 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 12:16:45.869413 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:16:45.892236 update_engine[1942]: I20251216 12:16:45.887590 1942 update_check_scheduler.cc:74] Next update check in 2m21s Dec 16 12:16:45.888574 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 12:16:45.894153 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:16:45.900746 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:16:45.901390 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:16:45.901390 ntpd[1930]: 16 Dec 12:16:45 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:16:45.900790 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:16:45.952721 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:16:45.956508 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 12:16:45.964727 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 12:16:46.055427 coreos-metadata[1923]: Dec 16 12:16:46.052 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:16:46.065234 coreos-metadata[1923]: Dec 16 12:16:46.065 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 12:16:46.070779 coreos-metadata[1923]: Dec 16 12:16:46.070 INFO Fetch successful Dec 16 12:16:46.070779 coreos-metadata[1923]: Dec 16 12:16:46.070 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 12:16:46.074616 coreos-metadata[1923]: Dec 16 12:16:46.074 INFO Fetch successful Dec 16 12:16:46.077356 coreos-metadata[1923]: Dec 16 12:16:46.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 12:16:46.080385 coreos-metadata[1923]: Dec 16 12:16:46.080 INFO Fetch successful Dec 16 12:16:46.081845 coreos-metadata[1923]: Dec 16 12:16:46.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 12:16:46.093674 coreos-metadata[1923]: Dec 16 12:16:46.088 INFO Fetch successful Dec 16 12:16:46.093674 coreos-metadata[1923]: Dec 16 12:16:46.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 12:16:46.093674 coreos-metadata[1923]: Dec 16 12:16:46.093 INFO Fetch failed with 404: resource not found Dec 16 12:16:46.093674 coreos-metadata[1923]: Dec 16 12:16:46.093 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 12:16:46.096837 coreos-metadata[1923]: Dec 16 12:16:46.096 INFO Fetch successful Dec 16 12:16:46.096837 coreos-metadata[1923]: Dec 16 12:16:46.096 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 12:16:46.100468 coreos-metadata[1923]: Dec 16 12:16:46.099 INFO Fetch successful Dec 16 12:16:46.103059 coreos-metadata[1923]: Dec 16 12:16:46.101 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 12:16:46.112383 coreos-metadata[1923]: Dec 16 12:16:46.110 INFO Fetch successful Dec 16 12:16:46.112383 
coreos-metadata[1923]: Dec 16 12:16:46.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 12:16:46.116256 coreos-metadata[1923]: Dec 16 12:16:46.112 INFO Fetch successful Dec 16 12:16:46.116256 coreos-metadata[1923]: Dec 16 12:16:46.113 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 12:16:46.125232 coreos-metadata[1923]: Dec 16 12:16:46.122 INFO Fetch successful Dec 16 12:16:46.175658 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:16:46.177187 systemd-logind[1939]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:16:46.178452 systemd-logind[1939]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 16 12:16:46.180377 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:16:46.190343 systemd-logind[1939]: New seat seat0. Dec 16 12:16:46.195001 systemd[1]: Starting sshkeys.service... Dec 16 12:16:46.208387 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:16:46.311423 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:16:46.325227 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 12:16:46.362013 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:16:46.370244 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:16:46.442438 amazon-ssm-agent[2006]: Initializing new seelog logger Dec 16 12:16:46.442438 amazon-ssm-agent[2006]: New Seelog Logger Creation Complete Dec 16 12:16:46.442438 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.442438 amazon-ssm-agent[2006]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.449618 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 processing appconfig overrides Dec 16 12:16:46.450148 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.450148 amazon-ssm-agent[2006]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.450440 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 processing appconfig overrides Dec 16 12:16:46.452243 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.452243 amazon-ssm-agent[2006]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.452243 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 processing appconfig overrides Dec 16 12:16:46.460270 amazon-ssm-agent[2006]: 2025-12-16 12:16:46.4500 INFO Proxy environment variables: Dec 16 12:16:46.483384 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:46.483384 amazon-ssm-agent[2006]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 12:16:46.483384 amazon-ssm-agent[2006]: 2025/12/16 12:16:46 processing appconfig overrides Dec 16 12:16:46.560327 amazon-ssm-agent[2006]: 2025-12-16 12:16:46.4500 INFO https_proxy: Dec 16 12:16:46.665317 amazon-ssm-agent[2006]: 2025-12-16 12:16:46.4500 INFO http_proxy: Dec 16 12:16:46.718982 coreos-metadata[2056]: Dec 16 12:16:46.718 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:16:46.722611 coreos-metadata[2056]: Dec 16 12:16:46.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 12:16:46.723587 coreos-metadata[2056]: Dec 16 12:16:46.723 INFO Fetch successful Dec 16 12:16:46.723587 coreos-metadata[2056]: Dec 16 12:16:46.723 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 12:16:46.725189 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 12:16:46.731345 coreos-metadata[2056]: Dec 16 12:16:46.731 INFO Fetch successful Dec 16 12:16:46.736147 unknown[2056]: wrote ssh authorized keys file for user: core Dec 16 12:16:46.759716 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 12:16:46.767999 amazon-ssm-agent[2006]: 2025-12-16 12:16:46.4500 INFO no_proxy: Dec 16 12:16:46.776515 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 12:16:46.791185 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 12:16:46.868476 amazon-ssm-agent[2006]: 2025-12-16 12:16:46.4502 INFO Checking if agent identity type OnPrem can be assumed Dec 16 12:16:46.900576 update-ssh-keys[2140]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:16:46.904407 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:16:46.925013 containerd[1971]: time="2025-12-16T12:16:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:16:46.925013 containerd[1971]: time="2025-12-16T12:16:46.919576438Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:16:46.912337 systemd[1]: Finished sshkeys.service. 
Dec 16 12:16:46.932506 locksmithd[2002]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:16:46.969145 amazon-ssm-agent[2006]: 2025-12-16 12:16:46.4503 INFO Checking if agent identity type EC2 can be assumed Dec 16 12:16:47.021226 containerd[1971]: time="2025-12-16T12:16:47.016486674Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.616µs" Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027002886Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027159054Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027195006Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027515226Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027558582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027686334Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:16:47.028243 containerd[1971]: time="2025-12-16T12:16:47.027714006Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.034566570Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.034622742Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.034659690Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.034681782Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035081178Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035117538Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035313294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035705514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 
12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035835594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035865006Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:16:47.036530 containerd[1971]: time="2025-12-16T12:16:47.035927274Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:16:47.050223 containerd[1971]: time="2025-12-16T12:16:47.048056826Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:16:47.050223 containerd[1971]: time="2025-12-16T12:16:47.049392330Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.068648299Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.068760199Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.068936491Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.068965195Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069012031Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069045355Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069077755Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069103771Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069134047Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069172939Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:16:47.069423 containerd[1971]: time="2025-12-16T12:16:47.069245071Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:16:47.069922 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0370 INFO Agent will take identity from EC2 Dec 16 12:16:47.070255 containerd[1971]: time="2025-12-16T12:16:47.069277483Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:16:47.070255 containerd[1971]: time="2025-12-16T12:16:47.070166287Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:16:47.070255 
containerd[1971]: time="2025-12-16T12:16:47.070223935Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070611871Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070688947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070726063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070780351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070810999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070862767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070895287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070943875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.070973815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.071028451Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.071057443Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.071144263Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.071316979Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.071349259Z" level=info msg="Start snapshots syncer" Dec 16 12:16:47.072224 containerd[1971]: time="2025-12-16T12:16:47.071421799Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:16:47.072864 containerd[1971]: time="2025-12-16T12:16:47.072134899Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:16:47.079403 containerd[1971]: time="2025-12-16T12:16:47.077329123Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:16:47.079403 containerd[1971]: time="2025-12-16T12:16:47.077518363Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:16:47.079403 containerd[1971]: time="2025-12-16T12:16:47.078075751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:16:47.079403 containerd[1971]: time="2025-12-16T12:16:47.079245367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:16:47.079403 containerd[1971]: time="2025-12-16T12:16:47.079319743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:16:47.079403 containerd[1971]: time="2025-12-16T12:16:47.079355659Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.079773343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.079815763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.082265515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.082322503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.082355659Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.082458403Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:16:47.083221 containerd[1971]: time="2025-12-16T12:16:47.082518439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.082541707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085287847Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085354255Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085386883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085443055Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085633747Z" level=info msg="runtime interface created" Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085651351Z" level=info msg="created NRI interface" Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085673059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085726759Z" level=info msg="Connect containerd service" Dec 16 12:16:47.085889 containerd[1971]: time="2025-12-16T12:16:47.085808311Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:16:47.101225 containerd[1971]: time="2025-12-16T12:16:47.095955079Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:16:47.123359 polkitd[2144]: Started polkitd version 126 Dec 16 12:16:47.168700 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0387 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 12:16:47.194954 polkitd[2144]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:16:47.200001 polkitd[2144]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:16:47.206381 polkitd[2144]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:16:47.207031 polkitd[2144]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:16:47.207082 polkitd[2144]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:16:47.207164 polkitd[2144]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:16:47.219260 polkitd[2144]: Finished 
loading, compiling and executing 2 rules Dec 16 12:16:47.220489 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 12:16:47.228706 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:16:47.233291 polkitd[2144]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:16:47.268524 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0388 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 16 12:16:47.329072 systemd-hostnamed[2000]: Hostname set to (transient) Dec 16 12:16:47.329115 systemd-resolved[1534]: System hostname changed to 'ip-172-31-20-6'. Dec 16 12:16:47.333268 sshd_keygen[1981]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:16:47.369233 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0388 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 12:16:47.467254 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0388 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 12:16:47.474473 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:16:47.487345 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:16:47.547573 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:16:47.548454 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:16:47.558368 containerd[1971]: time="2025-12-16T12:16:47.557790453Z" level=info msg="Start subscribing containerd event" Dec 16 12:16:47.558530 containerd[1971]: time="2025-12-16T12:16:47.558332361Z" level=info msg="Start recovering state" Dec 16 12:16:47.559141 containerd[1971]: time="2025-12-16T12:16:47.559110297Z" level=info msg="Start event monitor" Dec 16 12:16:47.559876 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:16:47.563632 containerd[1971]: time="2025-12-16T12:16:47.562781373Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:16:47.563632 containerd[1971]: time="2025-12-16T12:16:47.562823709Z" level=info msg="Start streaming server" Dec 16 12:16:47.563632 containerd[1971]: time="2025-12-16T12:16:47.562844253Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:16:47.563632 containerd[1971]: time="2025-12-16T12:16:47.563304801Z" level=info msg="runtime interface starting up..." Dec 16 12:16:47.563632 containerd[1971]: time="2025-12-16T12:16:47.563327325Z" level=info msg="starting plugins..." Dec 16 12:16:47.563632 containerd[1971]: time="2025-12-16T12:16:47.563369241Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:16:47.564030 containerd[1971]: time="2025-12-16T12:16:47.563997981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:16:47.564570 containerd[1971]: time="2025-12-16T12:16:47.564538257Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:16:47.565247 containerd[1971]: time="2025-12-16T12:16:47.565020741Z" level=info msg="containerd successfully booted in 0.647182s" Dec 16 12:16:47.565391 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:16:47.574675 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0388 INFO [Registrar] Starting registrar module Dec 16 12:16:47.614503 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:16:47.623872 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:16:47.636222 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
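The "starting cri plugin" record above embeds containerd's effective CRI configuration as an escaped JSON string in its config= field, which is how the journal alone confirms that the runc runtime runs with SystemdCgroup:true and that CNI configuration is expected under /etc/cni/net.d with binaries in /opt/cni/bin (hence the "failed to load cni during init ... no network config found in /etc/cni/net.d" error above, which persists until a network addon installs a config). A minimal sketch of recovering that structure with Python; the raw payload below is an illustrative excerpt, not the full config line:

    import json

    # Escaped config="..." payload as it appears in the journal record
    # (illustrative excerpt only; the real record carries the full CRI config).
    raw = '{\\"enableSelinux\\":true,\\"enableCDI\\":true,\\"cdiSpecDirs\\":[\\"/etc/cdi\\",\\"/var/run/cdi\\"]}'

    cfg = json.loads(raw.replace('\\"', '"'))   # undo the journal's quote escaping
    print(cfg["enableSelinux"], cfg["cdiSpecDirs"])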
Dec 16 12:16:47.640422 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:16:47.672953 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0408 INFO [EC2Identity] Checking disk for registration info Dec 16 12:16:47.774401 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0408 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 12:16:47.786536 amazon-ssm-agent[2006]: 2025/12/16 12:16:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:47.786536 amazon-ssm-agent[2006]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:16:47.786740 amazon-ssm-agent[2006]: 2025/12/16 12:16:47 processing appconfig overrides Dec 16 12:16:47.788855 tar[1948]: linux-arm64/README.md Dec 16 12:16:47.820457 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:16:47.823661 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.0408 INFO [EC2Identity] Generating registration keypair Dec 16 12:16:47.823790 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.7389 INFO [EC2Identity] Checking write access before registering Dec 16 12:16:47.823885 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.7396 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 12:16:47.824119 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.7862 INFO [EC2Identity] EC2 registration was successful. Dec 16 12:16:47.824119 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.7863 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 16 12:16:47.824388 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.7864 INFO [CredentialRefresher] credentialRefresher has started Dec 16 12:16:47.824388 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.7864 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 12:16:47.824388 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.8204 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 12:16:47.824732 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.8235 INFO [CredentialRefresher] Credentials ready Dec 16 12:16:47.874666 amazon-ssm-agent[2006]: 2025-12-16 12:16:47.8246 INFO [CredentialRefresher] Next credential rotation will be in 29.9999305548 minutes Dec 16 12:16:48.851915 amazon-ssm-agent[2006]: 2025-12-16 12:16:48.8516 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 12:16:48.952907 amazon-ssm-agent[2006]: 2025-12-16 12:16:48.8573 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2207) started Dec 16 12:16:49.053032 amazon-ssm-agent[2006]: 2025-12-16 12:16:48.8574 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 12:16:50.349941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:16:50.353745 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:16:50.356479 systemd[1]: Startup finished in 4.044s (kernel) + 12.080s (initrd) + 14.509s (userspace) = 30.633s. Dec 16 12:16:50.364438 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:16:51.144841 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:16:51.147488 systemd[1]: Started sshd@0-172.31.20.6:22-147.75.109.163:50156.service - OpenSSH per-connection server daemon (147.75.109.163:50156). 
Dec 16 12:16:51.461266 sshd[2233]: Accepted publickey for core from 147.75.109.163 port 50156 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:51.466094 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:51.480744 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:16:51.482704 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:16:51.498663 systemd-logind[1939]: New session 1 of user core. Dec 16 12:16:51.521746 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:16:51.527674 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:16:51.551812 (systemd)[2239]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:51.558791 systemd-logind[1939]: New session 2 of user core. Dec 16 12:16:51.918363 systemd[2239]: Queued start job for default target default.target. Dec 16 12:16:51.925898 systemd[2239]: Created slice app.slice - User Application Slice. Dec 16 12:16:51.925972 systemd[2239]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:16:51.926004 systemd[2239]: Reached target paths.target - Paths. Dec 16 12:16:51.926108 systemd[2239]: Reached target timers.target - Timers. Dec 16 12:16:51.929439 systemd[2239]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:16:51.931617 systemd[2239]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:16:51.961054 systemd[2239]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:16:51.978623 systemd[2239]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:16:51.978880 systemd[2239]: Reached target sockets.target - Sockets. Dec 16 12:16:51.978988 systemd[2239]: Reached target basic.target - Basic System. Dec 16 12:16:51.979072 systemd[2239]: Reached target default.target - Main User Target. Dec 16 12:16:51.979133 systemd[2239]: Startup finished in 407ms. Dec 16 12:16:51.979612 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:16:51.990511 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:16:52.083565 systemd[1]: Started sshd@1-172.31.20.6:22-147.75.109.163:50168.service - OpenSSH per-connection server daemon (147.75.109.163:50168). Dec 16 12:16:52.282514 sshd[2255]: Accepted publickey for core from 147.75.109.163 port 50168 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:52.286084 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:52.299643 systemd-logind[1939]: New session 3 of user core. Dec 16 12:16:52.306044 kubelet[2223]: E1216 12:16:52.305984 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:16:52.308616 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:16:52.312333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:16:52.312674 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
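The kubelet failure above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is the normal state for a node that has not yet been joined to a cluster: kubelet.service keeps exiting and restarting (the restart counter appears later in this log) until a provisioner such as kubeadm writes that file. Purely as an illustration of what lives at that path, and not what kubeadm actually generates, a minimal KubeletConfiguration could be dropped in like this (hypothetical content; cgroupDriver: systemd is assumed here to match the SystemdCgroup=true runc setting seen in the containerd config):

    from pathlib import Path

    # Hypothetical, minimal stand-in for /var/lib/kubelet/config.yaml; a real
    # kubeadm join writes a far more complete KubeletConfiguration here.
    minimal_config = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )
    Path("/var/lib/kubelet/config.yaml").write_text(minimal_config)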
Dec 16 12:16:52.316374 systemd[1]: kubelet.service: Consumed 1.455s CPU time, 257.4M memory peak. Dec 16 12:16:52.377962 sshd[2260]: Connection closed by 147.75.109.163 port 50168 Dec 16 12:16:52.379460 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:52.388516 systemd-logind[1939]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:16:52.389431 systemd[1]: sshd@1-172.31.20.6:22-147.75.109.163:50168.service: Deactivated successfully. Dec 16 12:16:52.393735 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:16:52.399611 systemd-logind[1939]: Removed session 3. Dec 16 12:16:52.414682 systemd[1]: Started sshd@2-172.31.20.6:22-147.75.109.163:50174.service - OpenSSH per-connection server daemon (147.75.109.163:50174). Dec 16 12:16:52.607634 sshd[2266]: Accepted publickey for core from 147.75.109.163 port 50174 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:52.609754 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:52.619283 systemd-logind[1939]: New session 4 of user core. Dec 16 12:16:52.625492 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:16:52.681360 sshd[2270]: Connection closed by 147.75.109.163 port 50174 Dec 16 12:16:52.682282 sshd-session[2266]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:52.689810 systemd[1]: sshd@2-172.31.20.6:22-147.75.109.163:50174.service: Deactivated successfully. Dec 16 12:16:52.694175 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:16:52.696319 systemd-logind[1939]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:16:52.699105 systemd-logind[1939]: Removed session 4. Dec 16 12:16:52.715421 systemd[1]: Started sshd@3-172.31.20.6:22-147.75.109.163:47334.service - OpenSSH per-connection server daemon (147.75.109.163:47334). Dec 16 12:16:52.356622 systemd-resolved[1534]: Clock change detected. Flushing caches. Dec 16 12:16:52.367591 systemd-journald[1461]: Time jumped backwards, rotating. Dec 16 12:16:52.447187 sshd[2276]: Accepted publickey for core from 147.75.109.163 port 47334 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:52.449375 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:52.459135 systemd-logind[1939]: New session 5 of user core. Dec 16 12:16:52.467348 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:16:52.530976 sshd[2281]: Connection closed by 147.75.109.163 port 47334 Dec 16 12:16:52.531719 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:52.538666 systemd[1]: sshd@3-172.31.20.6:22-147.75.109.163:47334.service: Deactivated successfully. Dec 16 12:16:52.541831 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:16:52.545169 systemd-logind[1939]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:16:52.548191 systemd-logind[1939]: Removed session 5. Dec 16 12:16:52.567283 systemd[1]: Started sshd@4-172.31.20.6:22-147.75.109.163:47340.service - OpenSSH per-connection server daemon (147.75.109.163:47340). Dec 16 12:16:52.746323 sshd[2287]: Accepted publickey for core from 147.75.109.163 port 47340 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:52.748313 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:52.758453 systemd-logind[1939]: New session 6 of user core. 
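The out-of-order timestamps just above (sshd@3 started at 12:16:52.715421, followed by records stamped 12:16:52.356622) are not corruption: systemd-resolved logs "Clock change detected" and systemd-journald rotates because the system clock was stepped backwards, presumably by time synchronization settling shortly after boot. The apparent size of the step falls straight out of the two adjacent timestamps (same date assumed, as here):

    from datetime import datetime

    # Adjacent journal timestamps straddling the backwards clock step
    before = datetime.strptime("12:16:52.715421", "%H:%M:%S.%f")
    after = datetime.strptime("12:16:52.356622", "%H:%M:%S.%f")
    print(f"clock stepped back ~{(before - after).total_seconds():.3f} s")  # ~0.359 s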
Dec 16 12:16:52.766421 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:16:52.824364 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:16:52.825011 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:16:52.838934 sudo[2292]: pam_unix(sudo:session): session closed for user root Dec 16 12:16:52.862112 sshd[2291]: Connection closed by 147.75.109.163 port 47340 Dec 16 12:16:52.862022 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:52.871237 systemd[1]: sshd@4-172.31.20.6:22-147.75.109.163:47340.service: Deactivated successfully. Dec 16 12:16:52.876477 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:16:52.878473 systemd-logind[1939]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:16:52.881748 systemd-logind[1939]: Removed session 6. Dec 16 12:16:52.902364 systemd[1]: Started sshd@5-172.31.20.6:22-147.75.109.163:47348.service - OpenSSH per-connection server daemon (147.75.109.163:47348). Dec 16 12:16:53.085652 sshd[2299]: Accepted publickey for core from 147.75.109.163 port 47348 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:53.088284 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:53.096701 systemd-logind[1939]: New session 7 of user core. Dec 16 12:16:53.111330 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:16:53.156710 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:16:53.157388 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:16:53.162299 sudo[2305]: pam_unix(sudo:session): session closed for user root Dec 16 12:16:53.174306 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:16:53.174941 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:16:53.190647 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:16:53.253943 kernel: kauditd_printk_skb: 144 callbacks suppressed Dec 16 12:16:53.254092 kernel: audit: type=1305 audit(1765887413.248:238): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:16:53.248000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:16:53.254212 augenrules[2329]: No rules Dec 16 12:16:53.248000 audit[2329]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe277cb90 a2=420 a3=0 items=0 ppid=2310 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:53.260880 kernel: audit: type=1300 audit(1765887413.248:238): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe277cb90 a2=420 a3=0 items=0 ppid=2310 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:53.254582 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:16:53.255110 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 12:16:53.248000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:16:53.266131 kernel: audit: type=1327 audit(1765887413.248:238): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:16:53.263922 sudo[2304]: pam_unix(sudo:session): session closed for user root Dec 16 12:16:53.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.270673 kernel: audit: type=1130 audit(1765887413.254:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.270806 kernel: audit: type=1131 audit(1765887413.254:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.262000 audit[2304]: USER_END pid=2304 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.279723 kernel: audit: type=1106 audit(1765887413.262:241): pid=2304 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.279763 kernel: audit: type=1104 audit(1765887413.263:242): pid=2304 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.263000 audit[2304]: CRED_DISP pid=2304 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:53.287272 sshd[2303]: Connection closed by 147.75.109.163 port 47348 Dec 16 12:16:53.287982 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:53.289000 audit[2299]: USER_END pid=2299 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.289000 audit[2299]: CRED_DISP pid=2299 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.304393 kernel: audit: type=1106 audit(1765887413.289:243): pid=2299 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.304467 kernel: audit: type=1104 audit(1765887413.289:244): pid=2299 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.305225 systemd[1]: sshd@5-172.31.20.6:22-147.75.109.163:47348.service: Deactivated successfully. Dec 16 12:16:53.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.20.6:22-147.75.109.163:47348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.308483 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:16:53.313127 kernel: audit: type=1131 audit(1765887413.304:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.20.6:22-147.75.109.163:47348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.313085 systemd-logind[1939]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:16:53.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.20.6:22-147.75.109.163:47358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.331037 systemd[1]: Started sshd@6-172.31.20.6:22-147.75.109.163:47358.service - OpenSSH per-connection server daemon (147.75.109.163:47358). Dec 16 12:16:53.333836 systemd-logind[1939]: Removed session 7. 
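The audit records above (and through the rest of this boot) carry the triggering command line as a PROCTITLE field: hex-encoded argv with NUL separators. Decoding it turns records like the auditctl CONFIG_CHANGE event into something readable; a minimal sketch using the proctitle value logged above:

    # Audit PROCTITLE fields are hex-encoded argv, NUL-separated; this value is
    # copied from the auditctl record above.
    hex_proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))  # /sbin/auditctl -R /etc/audit/audit.rules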
Dec 16 12:16:53.511000 audit[2338]: USER_ACCT pid=2338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.512551 sshd[2338]: Accepted publickey for core from 147.75.109.163 port 47358 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:16:53.512000 audit[2338]: CRED_ACQ pid=2338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.513000 audit[2338]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc38f0300 a2=3 a3=0 items=0 ppid=1 pid=2338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:53.513000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:16:53.515096 sshd-session[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:16:53.522774 systemd-logind[1939]: New session 8 of user core. Dec 16 12:16:53.530369 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:16:53.535000 audit[2338]: USER_START pid=2338 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.538000 audit[2342]: CRED_ACQ pid=2342 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:16:53.575485 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:16:53.574000 audit[2343]: USER_ACCT pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.574000 audit[2343]: CRED_REFR pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:53.576206 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:16:53.575000 audit[2343]: USER_START pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:54.801592 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 12:16:54.816573 (dockerd)[2362]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:16:55.905089 dockerd[2362]: time="2025-12-16T12:16:55.904889009Z" level=info msg="Starting up" Dec 16 12:16:55.908711 dockerd[2362]: time="2025-12-16T12:16:55.908647433Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:16:55.930306 dockerd[2362]: time="2025-12-16T12:16:55.930226409Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:16:55.998867 dockerd[2362]: time="2025-12-16T12:16:55.998804058Z" level=info msg="Loading containers: start." Dec 16 12:16:56.014108 kernel: Initializing XFRM netlink socket Dec 16 12:16:56.100000 audit[2411]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.100000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffed96e0c0 a2=0 a3=0 items=0 ppid=2362 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.100000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:16:56.104000 audit[2413]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.104000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc33aeb70 a2=0 a3=0 items=0 ppid=2362 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:16:56.108000 audit[2415]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.108000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffddcb7140 a2=0 a3=0 items=0 ppid=2362 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:16:56.113000 audit[2417]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.113000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbac0d40 a2=0 a3=0 items=0 ppid=2362 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:16:56.117000 audit[2419]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.117000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffddbae800 a2=0 a3=0 items=0 ppid=2362 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:16:56.121000 audit[2421]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.121000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe6fda160 a2=0 a3=0 items=0 ppid=2362 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.121000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:16:56.125000 audit[2423]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.125000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe39387c0 a2=0 a3=0 items=0 ppid=2362 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:16:56.130000 audit[2425]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.130000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffec757b70 a2=0 a3=0 items=0 ppid=2362 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:16:56.165000 audit[2428]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.165000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd9238a20 a2=0 a3=0 items=0 ppid=2362 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:16:56.170000 audit[2430]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2430 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.170000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffff63e4b0 a2=0 a3=0 items=0 ppid=2362 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:16:56.174000 audit[2432]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.174000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc6ca6190 a2=0 a3=0 items=0 ppid=2362 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:16:56.178000 audit[2434]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.178000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffed7a4ce0 a2=0 a3=0 items=0 ppid=2362 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:16:56.183000 audit[2436]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.183000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffbaadd00 a2=0 a3=0 items=0 ppid=2362 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:16:56.250000 audit[2466]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.250000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc3d0baf0 a2=0 a3=0 items=0 ppid=2362 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:16:56.254000 audit[2468]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.254000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcbf45f10 a2=0 a3=0 items=0 ppid=2362 
pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:16:56.258000 audit[2470]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.258000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce0504a0 a2=0 a3=0 items=0 ppid=2362 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:16:56.262000 audit[2472]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.262000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff900b90 a2=0 a3=0 items=0 ppid=2362 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:16:56.266000 audit[2474]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.266000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff0f51070 a2=0 a3=0 items=0 ppid=2362 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:16:56.271000 audit[2476]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.271000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc16def20 a2=0 a3=0 items=0 ppid=2362 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:16:56.275000 audit[2478]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.275000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffffef150 a2=0 a3=0 items=0 ppid=2362 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:16:56.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:16:56.279000 audit[2480]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.279000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd6a60170 a2=0 a3=0 items=0 ppid=2362 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:16:56.284000 audit[2482]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.284000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc935d280 a2=0 a3=0 items=0 ppid=2362 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:16:56.288000 audit[2484]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.288000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff3f97f30 a2=0 a3=0 items=0 ppid=2362 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.288000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:16:56.292000 audit[2486]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.292000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffec760fd0 a2=0 a3=0 items=0 ppid=2362 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:16:56.296000 audit[2488]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.296000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc98366a0 a2=0 a3=0 items=0 ppid=2362 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.296000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:16:56.300000 audit[2490]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.300000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff8981220 a2=0 a3=0 items=0 ppid=2362 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.300000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:16:56.311000 audit[2495]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.311000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff09dee70 a2=0 a3=0 items=0 ppid=2362 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:16:56.315000 audit[2497]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.315000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd3fafbf0 a2=0 a3=0 items=0 ppid=2362 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.315000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:16:56.319000 audit[2499]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.319000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc623ea50 a2=0 a3=0 items=0 ppid=2362 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:16:56.323000 audit[2501]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.323000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee3a6750 a2=0 a3=0 items=0 ppid=2362 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.323000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:16:56.328000 audit[2503]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.328000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc6a95050 a2=0 a3=0 items=0 ppid=2362 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:16:56.332000 audit[2505]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:56.332000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc5e17260 a2=0 a3=0 items=0 ppid=2362 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:16:56.351419 (udev-worker)[2383]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:16:56.370000 audit[2510]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.370000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffd7936c0 a2=0 a3=0 items=0 ppid=2362 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:16:56.375000 audit[2512]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.375000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd560f9e0 a2=0 a3=0 items=0 ppid=2362 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:16:56.393000 audit[2520]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.393000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffffd8c7540 a2=0 a3=0 items=0 ppid=2362 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.393000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:16:56.411000 audit[2526]: NETFILTER_CFG table=filter:37 family=2 
entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.411000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc04fba40 a2=0 a3=0 items=0 ppid=2362 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:16:56.417000 audit[2528]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.417000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffcb6d98d0 a2=0 a3=0 items=0 ppid=2362 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.417000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:16:56.421000 audit[2530]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.421000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffd43eb90 a2=0 a3=0 items=0 ppid=2362 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.421000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:16:56.425000 audit[2532]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.425000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd6774e70 a2=0 a3=0 items=0 ppid=2362 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:16:56.429000 audit[2534]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:56.429000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdeec5980 a2=0 a3=0 items=0 ppid=2362 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:56.429000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:16:56.431996 systemd-networkd[1564]: docker0: Link UP Dec 16 12:16:56.443990 dockerd[2362]: time="2025-12-16T12:16:56.443919940Z" level=info msg="Loading containers: done." Dec 16 12:16:56.490837 dockerd[2362]: time="2025-12-16T12:16:56.490775800Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:16:56.491039 dockerd[2362]: time="2025-12-16T12:16:56.490891804Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:16:56.491229 dockerd[2362]: time="2025-12-16T12:16:56.491186824Z" level=info msg="Initializing buildkit" Dec 16 12:16:56.542226 dockerd[2362]: time="2025-12-16T12:16:56.542092900Z" level=info msg="Completed buildkit initialization" Dec 16 12:16:56.556401 dockerd[2362]: time="2025-12-16T12:16:56.556309864Z" level=info msg="Daemon has completed initialization" Dec 16 12:16:56.556646 dockerd[2362]: time="2025-12-16T12:16:56.556419700Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:16:56.557295 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:16:56.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:56.962691 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck694757155-merged.mount: Deactivated successfully. Dec 16 12:16:58.408983 containerd[1971]: time="2025-12-16T12:16:58.408909042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:16:59.093347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919111412.mount: Deactivated successfully. 
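The audit PROCTITLE fields in the records above are the hex-encoded, NUL-separated argv of the process that triggered each rule change; they decode to ordinary iptables invocations issued by dockerd while it wires up the DOCKER, DOCKER-USER and DOCKER-ISOLATION chains. A minimal Python sketch that decodes the last value recorded above:

# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) into a readable command line.
def decode_proctitle(hex_str: str) -> str:
    return " ".join(arg.decode() for arg in bytes.fromhex(hex_str).split(b"\x00") if arg)

print(decode_proctitle("2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50"))
# -> /usr/bin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP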
Dec 16 12:17:00.543095 containerd[1971]: time="2025-12-16T12:17:00.542533736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:00.545342 containerd[1971]: time="2025-12-16T12:17:00.545276336Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835766" Dec 16 12:17:00.547974 containerd[1971]: time="2025-12-16T12:17:00.547902296Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:00.553632 containerd[1971]: time="2025-12-16T12:17:00.553553264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:00.555744 containerd[1971]: time="2025-12-16T12:17:00.555473192Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.146499038s" Dec 16 12:17:00.555744 containerd[1971]: time="2025-12-16T12:17:00.555528632Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 12:17:00.557446 containerd[1971]: time="2025-12-16T12:17:00.557400716Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 12:17:02.029141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:17:02.032247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
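The kube-apiserver pull above reports both the bytes fetched and the wall-clock time, so a rough download rate can be read off directly; a small sketch of that arithmetic (figures copied from the log, and the elapsed time also covers registry round-trips and unpacking, so this is a rough figure rather than a pure network throughput measurement):

# Rough download rate for the kube-apiserver:v1.32.10 pull reported above.
bytes_read = 24_835_766        # "bytes read=24835766"
elapsed_s = 2.146499038        # "in 2.146499038s"
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")   # ~11.6 MB/s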
Dec 16 12:17:02.229785 containerd[1971]: time="2025-12-16T12:17:02.229708125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:02.234408 containerd[1971]: time="2025-12-16T12:17:02.234314481Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 16 12:17:02.238103 containerd[1971]: time="2025-12-16T12:17:02.237827889Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:02.246441 containerd[1971]: time="2025-12-16T12:17:02.246366033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:02.249326 containerd[1971]: time="2025-12-16T12:17:02.249047637Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.691447637s" Dec 16 12:17:02.249326 containerd[1971]: time="2025-12-16T12:17:02.249152661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 12:17:02.250146 containerd[1971]: time="2025-12-16T12:17:02.249972357Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 12:17:02.416673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:02.424458 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:17:02.424561 kernel: audit: type=1130 audit(1765887422.416:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:02.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:02.427594 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:17:02.504704 kubelet[2644]: E1216 12:17:02.504645 2644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:17:02.512333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:17:02.512639 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:17:02.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:17:02.513666 systemd[1]: kubelet.service: Consumed 320ms CPU time, 107.2M memory peak. Dec 16 12:17:02.518102 kernel: audit: type=1131 audit(1765887422.512:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:17:03.672447 containerd[1971]: time="2025-12-16T12:17:03.672368640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:03.674462 containerd[1971]: time="2025-12-16T12:17:03.674387556Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17613052" Dec 16 12:17:03.677119 containerd[1971]: time="2025-12-16T12:17:03.676334640Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:03.681884 containerd[1971]: time="2025-12-16T12:17:03.680907636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:03.683000 containerd[1971]: time="2025-12-16T12:17:03.682942404Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.432645867s" Dec 16 12:17:03.683122 containerd[1971]: time="2025-12-16T12:17:03.682997124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 12:17:03.683869 containerd[1971]: time="2025-12-16T12:17:03.683812356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 12:17:05.001678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1186705402.mount: Deactivated successfully. 
Dec 16 12:17:05.594752 containerd[1971]: time="2025-12-16T12:17:05.594700249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:05.597350 containerd[1971]: time="2025-12-16T12:17:05.597272557Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=17716804" Dec 16 12:17:05.598624 containerd[1971]: time="2025-12-16T12:17:05.598567261Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:05.601571 containerd[1971]: time="2025-12-16T12:17:05.601509697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:05.603011 containerd[1971]: time="2025-12-16T12:17:05.602968669Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.919099673s" Dec 16 12:17:05.603174 containerd[1971]: time="2025-12-16T12:17:05.603145357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:17:05.603842 containerd[1971]: time="2025-12-16T12:17:05.603779341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:17:06.113594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086629097.mount: Deactivated successfully. 
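The \x2d sequences in the containerd tmpmount unit names above are not log corruption: systemd escapes literal dashes in a mount point as \x2d and then turns the path separators into dashes when deriving the unit name. A minimal sketch of that escaping, covering only the two cases visible in the log (the real systemd-escape rules also handle other special characters):

# Derive a systemd mount unit name from a mount point, handling only "-" and "/".
def systemd_escape_path(path: str) -> str:
    return path.strip("/").replace("-", "\\x2d").replace("/", "-")

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1186705402") + ".mount")
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1186705402.mount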
Dec 16 12:17:07.329337 containerd[1971]: time="2025-12-16T12:17:07.329267594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:07.332077 containerd[1971]: time="2025-12-16T12:17:07.331986110Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Dec 16 12:17:07.334795 containerd[1971]: time="2025-12-16T12:17:07.334694258Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:07.341099 containerd[1971]: time="2025-12-16T12:17:07.340889642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:07.343315 containerd[1971]: time="2025-12-16T12:17:07.342884354Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.738920309s" Dec 16 12:17:07.343315 containerd[1971]: time="2025-12-16T12:17:07.342936206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:17:07.344324 containerd[1971]: time="2025-12-16T12:17:07.344273462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:17:07.809090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438555294.mount: Deactivated successfully. 
Dec 16 12:17:07.825040 containerd[1971]: time="2025-12-16T12:17:07.824101108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:17:07.826188 containerd[1971]: time="2025-12-16T12:17:07.826103440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:17:07.828725 containerd[1971]: time="2025-12-16T12:17:07.828653404Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:17:07.833203 containerd[1971]: time="2025-12-16T12:17:07.833126164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:17:07.834571 containerd[1971]: time="2025-12-16T12:17:07.834366292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 490.034354ms" Dec 16 12:17:07.834571 containerd[1971]: time="2025-12-16T12:17:07.834420664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:17:07.836123 containerd[1971]: time="2025-12-16T12:17:07.835949980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:17:08.376225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410256407.mount: Deactivated successfully. 
Dec 16 12:17:10.808120 containerd[1971]: time="2025-12-16T12:17:10.807772219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:10.809931 containerd[1971]: time="2025-12-16T12:17:10.809844139Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Dec 16 12:17:10.812534 containerd[1971]: time="2025-12-16T12:17:10.812467159Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:10.819214 containerd[1971]: time="2025-12-16T12:17:10.818294359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:10.820476 containerd[1971]: time="2025-12-16T12:17:10.820416103Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.984378199s" Dec 16 12:17:10.820557 containerd[1971]: time="2025-12-16T12:17:10.820473283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:17:12.529144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:17:12.533392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:12.868754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:12.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:12.878102 kernel: audit: type=1130 audit(1765887432.867:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:12.880729 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:17:12.962078 kubelet[2802]: E1216 12:17:12.961984 2802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:17:12.965924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:17:12.966329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:17:12.966983 systemd[1]: kubelet.service: Consumed 299ms CPU time, 105.6M memory peak. Dec 16 12:17:12.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:17:12.973146 kernel: audit: type=1131 audit(1765887432.965:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:17:16.910737 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 12:17:16.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:16.923301 kernel: audit: type=1131 audit(1765887436.911:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:16.927000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:17:16.930091 kernel: audit: type=1334 audit(1765887436.927:301): prog-id=66 op=UNLOAD Dec 16 12:17:19.035825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:19.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:19.036437 systemd[1]: kubelet.service: Consumed 299ms CPU time, 105.6M memory peak. Dec 16 12:17:19.040817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:19.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:19.047161 kernel: audit: type=1130 audit(1765887439.035:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:19.048438 kernel: audit: type=1131 audit(1765887439.035:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:19.103530 systemd[1]: Reload requested from client PID 2820 ('systemctl') (unit session-8.scope)... Dec 16 12:17:19.103748 systemd[1]: Reloading... Dec 16 12:17:19.377105 zram_generator::config[2870]: No configuration found. Dec 16 12:17:19.854749 systemd[1]: Reloading finished in 750 ms. 
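Both kubelet attempts above exit immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally generated later by kubeadm, so systemd simply keeps retrying the unit until it appears. The two "Scheduled restart job" events are about ten seconds apart, consistent with an on-failure restart policy with a roughly ten-second delay; the arithmetic, using the timestamps recorded above:

# Interval between the two "Scheduled restart job" events for kubelet.service.
from datetime import datetime

fmt = "%H:%M:%S.%f"
first = datetime.strptime("12:17:02.029141", fmt)    # restart counter is at 1
second = datetime.strptime("12:17:12.529144", fmt)   # restart counter is at 2
print((second - first).total_seconds())              # -> 10.500003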
Dec 16 12:17:19.910279 kernel: audit: type=1334 audit(1765887439.902:304): prog-id=70 op=LOAD Dec 16 12:17:19.910391 kernel: audit: type=1334 audit(1765887439.902:305): prog-id=62 op=UNLOAD Dec 16 12:17:19.910433 kernel: audit: type=1334 audit(1765887439.908:306): prog-id=71 op=LOAD Dec 16 12:17:19.910476 kernel: audit: type=1334 audit(1765887439.908:307): prog-id=63 op=UNLOAD Dec 16 12:17:19.902000 audit: BPF prog-id=70 op=LOAD Dec 16 12:17:19.902000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:17:19.908000 audit: BPF prog-id=71 op=LOAD Dec 16 12:17:19.908000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:17:19.909000 audit: BPF prog-id=72 op=LOAD Dec 16 12:17:19.915088 kernel: audit: type=1334 audit(1765887439.909:308): prog-id=72 op=LOAD Dec 16 12:17:19.915192 kernel: audit: type=1334 audit(1765887439.911:309): prog-id=73 op=LOAD Dec 16 12:17:19.915237 kernel: audit: type=1334 audit(1765887439.911:310): prog-id=64 op=UNLOAD Dec 16 12:17:19.911000 audit: BPF prog-id=73 op=LOAD Dec 16 12:17:19.911000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:17:19.918078 kernel: audit: type=1334 audit(1765887439.911:311): prog-id=65 op=UNLOAD Dec 16 12:17:19.911000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:17:19.919000 audit: BPF prog-id=74 op=LOAD Dec 16 12:17:19.919000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:17:19.919000 audit: BPF prog-id=75 op=LOAD Dec 16 12:17:19.919000 audit: BPF prog-id=76 op=LOAD Dec 16 12:17:19.919000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:17:19.919000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:17:19.922000 audit: BPF prog-id=77 op=LOAD Dec 16 12:17:19.928000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:17:19.931000 audit: BPF prog-id=78 op=LOAD Dec 16 12:17:19.931000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:17:19.932000 audit: BPF prog-id=79 op=LOAD Dec 16 12:17:19.932000 audit: BPF prog-id=80 op=LOAD Dec 16 12:17:19.932000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:17:19.932000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:17:19.933000 audit: BPF prog-id=81 op=LOAD Dec 16 12:17:19.933000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:17:19.934000 audit: BPF prog-id=82 op=LOAD Dec 16 12:17:19.934000 audit: BPF prog-id=83 op=LOAD Dec 16 12:17:19.934000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:17:19.934000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:17:19.934000 audit: BPF prog-id=84 op=LOAD Dec 16 12:17:19.934000 audit: BPF prog-id=85 op=LOAD Dec 16 12:17:19.934000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:17:19.934000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:17:19.936000 audit: BPF prog-id=86 op=LOAD Dec 16 12:17:19.936000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:17:19.936000 audit: BPF prog-id=87 op=LOAD Dec 16 12:17:19.936000 audit: BPF prog-id=88 op=LOAD Dec 16 12:17:19.937000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:17:19.937000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:17:19.938000 audit: BPF prog-id=89 op=LOAD Dec 16 12:17:19.938000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:17:19.968400 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:17:19.968594 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:17:19.970151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:19.970273 systemd[1]: kubelet.service: Consumed 229ms CPU time, 95.1M memory peak. Dec 16 12:17:19.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:17:19.974183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:20.300105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:20.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:20.315904 (kubelet)[2931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:17:20.391606 kubelet[2931]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:17:20.393098 kubelet[2931]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:17:20.393098 kubelet[2931]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:17:20.393098 kubelet[2931]: I1216 12:17:20.392246 2931 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:17:20.912571 kubelet[2931]: I1216 12:17:20.912499 2931 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:17:20.912571 kubelet[2931]: I1216 12:17:20.912549 2931 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:17:20.913079 kubelet[2931]: I1216 12:17:20.913010 2931 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:17:20.961863 kubelet[2931]: E1216 12:17:20.961815 2931 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.6:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:20.973550 kubelet[2931]: I1216 12:17:20.972849 2931 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:17:20.986140 kubelet[2931]: I1216 12:17:20.986087 2931 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:17:20.993106 kubelet[2931]: I1216 12:17:20.991843 2931 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:17:20.993106 kubelet[2931]: I1216 12:17:20.992316 2931 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:17:20.993106 kubelet[2931]: I1216 12:17:20.992356 2931 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:17:20.993106 kubelet[2931]: I1216 12:17:20.992783 2931 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:17:20.993503 kubelet[2931]: I1216 12:17:20.992803 2931 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:17:20.993503 kubelet[2931]: I1216 12:17:20.993160 2931 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:17:21.001739 kubelet[2931]: I1216 12:17:21.001178 2931 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:17:21.001739 kubelet[2931]: I1216 12:17:21.001235 2931 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:17:21.001739 kubelet[2931]: I1216 12:17:21.001280 2931 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:17:21.001739 kubelet[2931]: I1216 12:17:21.001309 2931 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:17:21.009132 kubelet[2931]: W1216 12:17:21.008994 2931 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-6&limit=500&resourceVersion=0": dial tcp 172.31.20.6:6443: connect: connection refused Dec 16 12:17:21.009336 kubelet[2931]: E1216 12:17:21.009303 2931 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-6&limit=500&resourceVersion=0\": dial tcp 172.31.20.6:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:21.009588 kubelet[2931]: I1216 12:17:21.009564 2931 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:17:21.010746 kubelet[2931]: I1216 12:17:21.010717 2931 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:17:21.011082 kubelet[2931]: W1216 12:17:21.011046 2931 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:17:21.013619 kubelet[2931]: I1216 12:17:21.013574 2931 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:17:21.013793 kubelet[2931]: I1216 12:17:21.013775 2931 server.go:1287] "Started kubelet" Dec 16 12:17:21.017729 kubelet[2931]: W1216 12:17:21.017632 2931 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.6:6443: connect: connection refused Dec 16 12:17:21.017891 kubelet[2931]: E1216 12:17:21.017729 2931 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.6:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:21.017968 kubelet[2931]: I1216 12:17:21.017932 2931 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:17:21.019498 kubelet[2931]: I1216 12:17:21.019422 2931 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:17:21.020192 kubelet[2931]: I1216 12:17:21.020161 2931 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:17:21.021710 kubelet[2931]: E1216 12:17:21.021254 2931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.6:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-6.1881b14582f2a022 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-6,UID:ip-172-31-20-6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-6,},FirstTimestamp:2025-12-16 12:17:21.01374365 +0000 UTC m=+0.691200365,LastTimestamp:2025-12-16 12:17:21.01374365 +0000 UTC m=+0.691200365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-6,}" Dec 16 12:17:21.022918 kubelet[2931]: I1216 12:17:21.021888 2931 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:17:21.027000 audit[2942]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.027000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe5f6fbc0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.027000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:17:21.029849 kubelet[2931]: I1216 12:17:21.022693 2931 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:17:21.032115 kubelet[2931]: E1216 12:17:21.032009 2931 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-6\" not found" Dec 16 12:17:21.032000 audit[2943]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.032000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffda0a150 a2=0 a3=0 items=0 ppid=2931 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:17:21.035108 kubelet[2931]: I1216 12:17:21.033365 2931 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:17:21.035108 kubelet[2931]: I1216 12:17:21.033386 2931 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:17:21.035108 kubelet[2931]: I1216 12:17:21.022967 2931 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:17:21.037394 kubelet[2931]: I1216 12:17:21.037327 2931 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:17:21.041600 kubelet[2931]: W1216 12:17:21.041493 2931 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.6:6443: connect: connection refused Dec 16 12:17:21.041736 kubelet[2931]: E1216 12:17:21.041611 2931 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.6:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:21.043994 kubelet[2931]: I1216 12:17:21.043249 2931 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:17:21.045315 kubelet[2931]: I1216 12:17:21.045275 2931 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:17:21.044000 audit[2945]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.044000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd0c5fd40 a2=0 a3=0 items=0 ppid=2931 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:17:21.046735 kubelet[2931]: E1216 12:17:21.045305 2931 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://172.31.20.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-6?timeout=10s\": dial tcp 172.31.20.6:6443: connect: connection refused" interval="200ms" Dec 16 12:17:21.048326 kubelet[2931]: E1216 12:17:21.048290 2931 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:17:21.051080 kubelet[2931]: I1216 12:17:21.050041 2931 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:17:21.052000 audit[2947]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.052000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffce023940 a2=0 a3=0 items=0 ppid=2931 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:17:21.077000 audit[2954]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.077000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffca42c8d0 a2=0 a3=0 items=0 ppid=2931 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:17:21.080319 kubelet[2931]: I1216 12:17:21.080257 2931 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:17:21.081000 audit[2957]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.081000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf7c47b0 a2=0 a3=0 items=0 ppid=2931 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:17:21.082000 audit[2956]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:21.082000 audit[2956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeaf78470 a2=0 a3=0 items=0 ppid=2931 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:17:21.084452 kubelet[2931]: I1216 12:17:21.084417 2931 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:17:21.084664 kubelet[2931]: I1216 12:17:21.084620 2931 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:17:21.084814 kubelet[2931]: I1216 12:17:21.084794 2931 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
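The HardEvictionThresholds embedded in the container-manager NodeConfig dump a few entries above are the kubelet's eviction settings in their internal form; restated in the usual evictionHard notation (values taken from that dump):

# The hard eviction thresholds from the NodeConfig dump, in evictionHard form.
eviction_hard = {
    "memory.available": "100Mi",   # Quantity 100Mi
    "nodefs.available": "10%",     # Percentage 0.1
    "nodefs.inodesFree": "5%",     # Percentage 0.05
    "imagefs.available": "15%",    # Percentage 0.15
    "imagefs.inodesFree": "5%",    # Percentage 0.05
}
for signal, threshold in eviction_hard.items():
    print(f"{signal}<{threshold}")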
Dec 16 12:17:21.084963 kubelet[2931]: I1216 12:17:21.084945 2931 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:17:21.084000 audit[2958]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.086461 kubelet[2931]: E1216 12:17:21.086268 2931 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:17:21.086960 kubelet[2931]: W1216 12:17:21.086907 2931 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.6:6443: connect: connection refused Dec 16 12:17:21.087204 kubelet[2931]: E1216 12:17:21.087147 2931 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.6:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:21.084000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff88ec000 a2=0 a3=0 items=0 ppid=2931 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:17:21.087855 kubelet[2931]: I1216 12:17:21.087796 2931 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:17:21.087973 kubelet[2931]: I1216 12:17:21.087952 2931 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:17:21.088194 kubelet[2931]: I1216 12:17:21.088153 2931 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:17:21.087000 audit[2959]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:21.087000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc03a7b0 a2=0 a3=0 items=0 ppid=2931 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:17:21.091000 audit[2960]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:21.091000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1ca22b0 a2=0 a3=0 items=0 ppid=2931 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:17:21.091000 audit[2961]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2961 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:21.091000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9f80d60 a2=0 a3=0 items=0 ppid=2931 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.091000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:17:21.093078 kubelet[2931]: I1216 12:17:21.093030 2931 policy_none.go:49] "None policy: Start" Dec 16 12:17:21.093239 kubelet[2931]: I1216 12:17:21.093219 2931 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:17:21.093603 kubelet[2931]: I1216 12:17:21.093329 2931 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:17:21.094000 audit[2962]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:21.094000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec8c7730 a2=0 a3=0 items=0 ppid=2931 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.094000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:17:21.105882 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:17:21.122958 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:17:21.130398 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:17:21.134469 kubelet[2931]: E1216 12:17:21.134403 2931 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-6\" not found" Dec 16 12:17:21.141992 kubelet[2931]: I1216 12:17:21.141937 2931 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:17:21.142334 kubelet[2931]: I1216 12:17:21.142315 2931 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:17:21.142513 kubelet[2931]: I1216 12:17:21.142337 2931 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:17:21.144557 kubelet[2931]: I1216 12:17:21.142684 2931 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:17:21.147350 kubelet[2931]: E1216 12:17:21.147302 2931 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:17:21.147490 kubelet[2931]: E1216 12:17:21.147373 2931 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-6\" not found" Dec 16 12:17:21.210785 systemd[1]: Created slice kubepods-burstable-podb1b652fec7071822d2ad5e5c3467308a.slice - libcontainer container kubepods-burstable-podb1b652fec7071822d2ad5e5c3467308a.slice. 
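The kubepods-burstable-pod<UID>.slice names above follow the kubelet's systemd cgroup-driver convention: the pod's QoS class and UID are folded into nested slice names under kubepods.slice, matching the /sys/fs/cgroup paths that appear in the nearby readString warnings. A small sketch of that mapping (UID taken from the log; dashes in a UID, if present, are replaced with underscores in slice names):

# Map a burstable pod UID to its systemd slice and cgroup path, as seen in the log.
pod_uid = "b1b652fec7071822d2ad5e5c3467308a"
slice_name = f"kubepods-burstable-pod{pod_uid.replace('-', '_')}.slice"
print(f"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/{slice_name}")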
Dec 16 12:17:21.216206 kubelet[2931]: W1216 12:17:21.216137 2931 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b652fec7071822d2ad5e5c3467308a.slice/cpu.weight": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b652fec7071822d2ad5e5c3467308a.slice/cpu.weight: no such device Dec 16 12:17:21.234795 kubelet[2931]: E1216 12:17:21.234413 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:21.239301 kubelet[2931]: I1216 12:17:21.239229 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db86a9c599c7516e8f7d726a117eb187-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-6\" (UID: \"db86a9c599c7516e8f7d726a117eb187\") " pod="kube-system/kube-scheduler-ip-172-31-20-6" Dec 16 12:17:21.239539 kubelet[2931]: I1216 12:17:21.239300 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1b652fec7071822d2ad5e5c3467308a-ca-certs\") pod \"kube-apiserver-ip-172-31-20-6\" (UID: \"b1b652fec7071822d2ad5e5c3467308a\") " pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:21.239539 kubelet[2931]: I1216 12:17:21.239344 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1b652fec7071822d2ad5e5c3467308a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-6\" (UID: \"b1b652fec7071822d2ad5e5c3467308a\") " pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:21.239539 kubelet[2931]: I1216 12:17:21.239390 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:21.239539 kubelet[2931]: I1216 12:17:21.239444 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:21.239539 kubelet[2931]: I1216 12:17:21.239481 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:21.239807 kubelet[2931]: I1216 12:17:21.239516 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1b652fec7071822d2ad5e5c3467308a-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-6\" (UID: \"b1b652fec7071822d2ad5e5c3467308a\") " pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:21.239807 kubelet[2931]: I1216 12:17:21.239553 2931 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:21.239807 kubelet[2931]: I1216 12:17:21.239592 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:21.242901 systemd[1]: Created slice kubepods-burstable-pod7ad99346c528774cdc680c6de71930cf.slice - libcontainer container kubepods-burstable-pod7ad99346c528774cdc680c6de71930cf.slice. Dec 16 12:17:21.247724 kubelet[2931]: W1216 12:17:21.247590 2931 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ad99346c528774cdc680c6de71930cf.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ad99346c528774cdc680c6de71930cf.slice/cpuset.cpus.effective: no such device Dec 16 12:17:21.248690 kubelet[2931]: E1216 12:17:21.248601 2931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-6?timeout=10s\": dial tcp 172.31.20.6:6443: connect: connection refused" interval="400ms" Dec 16 12:17:21.249428 kubelet[2931]: I1216 12:17:21.249292 2931 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-6" Dec 16 12:17:21.250088 kubelet[2931]: E1216 12:17:21.250018 2931 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.6:6443/api/v1/nodes\": dial tcp 172.31.20.6:6443: connect: connection refused" node="ip-172-31-20-6" Dec 16 12:17:21.257078 kubelet[2931]: E1216 12:17:21.256942 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:21.265959 systemd[1]: Created slice kubepods-burstable-poddb86a9c599c7516e8f7d726a117eb187.slice - libcontainer container kubepods-burstable-poddb86a9c599c7516e8f7d726a117eb187.slice. 
Dec 16 12:17:21.271755 kubelet[2931]: E1216 12:17:21.271704 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:21.453719 kubelet[2931]: I1216 12:17:21.453662 2931 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-6" Dec 16 12:17:21.454303 kubelet[2931]: E1216 12:17:21.454150 2931 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.6:6443/api/v1/nodes\": dial tcp 172.31.20.6:6443: connect: connection refused" node="ip-172-31-20-6" Dec 16 12:17:21.537465 containerd[1971]: time="2025-12-16T12:17:21.537215740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-6,Uid:b1b652fec7071822d2ad5e5c3467308a,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:21.561700 containerd[1971]: time="2025-12-16T12:17:21.561621857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-6,Uid:7ad99346c528774cdc680c6de71930cf,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:21.587112 containerd[1971]: time="2025-12-16T12:17:21.587001365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-6,Uid:db86a9c599c7516e8f7d726a117eb187,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:21.593382 containerd[1971]: time="2025-12-16T12:17:21.593295461Z" level=info msg="connecting to shim 0b0f3994fc4f8da6d8b341d591f5dd338356a0e2f97828886dee80224897a51e" address="unix:///run/containerd/s/1712bce331c2fc426c4417e097f13163dbc70ffcc3505f331a0a1a871f6426a7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:21.648390 containerd[1971]: time="2025-12-16T12:17:21.648237821Z" level=info msg="connecting to shim 10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7" address="unix:///run/containerd/s/54cae46c8b306d43329af42cd45bbab01f0ff2cf83421204d687166d9ee956d6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:21.652329 kubelet[2931]: E1216 12:17:21.652241 2931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-6?timeout=10s\": dial tcp 172.31.20.6:6443: connect: connection refused" interval="800ms" Dec 16 12:17:21.687985 containerd[1971]: time="2025-12-16T12:17:21.685626437Z" level=info msg="connecting to shim 67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a" address="unix:///run/containerd/s/4a05be2f28b5aba37ec9f2b295b4fa1940412bd2f5846a709a18fc40935cca78" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:21.695490 systemd[1]: Started cri-containerd-0b0f3994fc4f8da6d8b341d591f5dd338356a0e2f97828886dee80224897a51e.scope - libcontainer container 0b0f3994fc4f8da6d8b341d591f5dd338356a0e2f97828886dee80224897a51e. Dec 16 12:17:21.734099 systemd[1]: Started cri-containerd-10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7.scope - libcontainer container 10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7. 
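The three RunPodSandbox requests above are for the control-plane static pods; the kubelet names a static pod after its manifest plus the node name, which is where the -ip-172-31-20-6 suffix comes from. A trivial illustration of that naming (node name taken from the log):

# Static pod names are "<manifest name>-<node name>".
node = "ip-172-31-20-6"
for manifest in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
    print(f"{manifest}-{node}")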
Dec 16 12:17:21.757000 audit: BPF prog-id=90 op=LOAD Dec 16 12:17:21.759000 audit: BPF prog-id=91 op=LOAD Dec 16 12:17:21.759000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.760000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:17:21.760000 audit[2983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.763000 audit: BPF prog-id=92 op=LOAD Dec 16 12:17:21.763000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.764000 audit: BPF prog-id=93 op=LOAD Dec 16 12:17:21.764000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.765000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:17:21.765000 audit[2983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.766000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:17:21.766000 audit[2983]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.767000 audit: BPF prog-id=94 op=LOAD Dec 16 12:17:21.767000 audit[2983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2972 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062306633393934666334663864613664386233343164353931663564 Dec 16 12:17:21.771773 systemd[1]: Started cri-containerd-67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a.scope - libcontainer container 67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a. Dec 16 12:17:21.780000 audit: BPF prog-id=95 op=LOAD Dec 16 12:17:21.781000 audit: BPF prog-id=96 op=LOAD Dec 16 12:17:21.781000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.781000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:17:21.781000 audit[3025]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.783000 audit: BPF prog-id=97 op=LOAD Dec 16 12:17:21.783000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.783000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.783000 audit: BPF prog-id=98 op=LOAD Dec 16 12:17:21.783000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.783000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:17:21.783000 audit[3025]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.783000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:17:21.783000 audit[3025]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.783000 audit: BPF prog-id=99 op=LOAD Dec 16 12:17:21.783000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2997 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130353031613462393865333330363736306537363539666362333562 Dec 16 12:17:21.832000 audit: BPF prog-id=100 op=LOAD Dec 16 12:17:21.840000 audit: BPF prog-id=101 op=LOAD Dec 16 12:17:21.840000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.840000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.841000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:17:21.841000 audit[3049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.841000 audit: BPF prog-id=102 op=LOAD Dec 16 12:17:21.841000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.841000 audit: BPF prog-id=103 op=LOAD Dec 16 12:17:21.841000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.841000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:17:21.841000 audit[3049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.841000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:17:21.841000 audit[3049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.841000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.841000 audit: BPF prog-id=104 op=LOAD Dec 16 12:17:21.841000 audit[3049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3023 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:21.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637646266656437343864633262373834333934333435396563373737 Dec 16 12:17:21.863454 kubelet[2931]: I1216 12:17:21.863401 2931 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-6" Dec 16 12:17:21.865879 kubelet[2931]: E1216 12:17:21.865125 2931 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.6:6443/api/v1/nodes\": dial tcp 172.31.20.6:6443: connect: connection refused" node="ip-172-31-20-6" Dec 16 12:17:21.889464 containerd[1971]: time="2025-12-16T12:17:21.889395042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-6,Uid:b1b652fec7071822d2ad5e5c3467308a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b0f3994fc4f8da6d8b341d591f5dd338356a0e2f97828886dee80224897a51e\"" Dec 16 12:17:21.895098 containerd[1971]: time="2025-12-16T12:17:21.893883846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-6,Uid:7ad99346c528774cdc680c6de71930cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7\"" Dec 16 12:17:21.907680 containerd[1971]: time="2025-12-16T12:17:21.907580034Z" level=info msg="CreateContainer within sandbox \"0b0f3994fc4f8da6d8b341d591f5dd338356a0e2f97828886dee80224897a51e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:17:21.917526 containerd[1971]: time="2025-12-16T12:17:21.916165722Z" level=info msg="CreateContainer within sandbox \"10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:17:21.935862 containerd[1971]: time="2025-12-16T12:17:21.935798382Z" level=info msg="Container 229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:21.939889 containerd[1971]: time="2025-12-16T12:17:21.939819318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-6,Uid:db86a9c599c7516e8f7d726a117eb187,Namespace:kube-system,Attempt:0,} returns sandbox id \"67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a\"" Dec 16 12:17:21.945303 containerd[1971]: time="2025-12-16T12:17:21.945238806Z" level=info msg="CreateContainer within sandbox \"67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:17:21.949871 containerd[1971]: time="2025-12-16T12:17:21.949806366Z" level=info msg="Container e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0: CDI devices from CRI 
Config.CDIDevices: []" Dec 16 12:17:21.963603 containerd[1971]: time="2025-12-16T12:17:21.963475327Z" level=info msg="CreateContainer within sandbox \"0b0f3994fc4f8da6d8b341d591f5dd338356a0e2f97828886dee80224897a51e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6\"" Dec 16 12:17:21.964757 containerd[1971]: time="2025-12-16T12:17:21.964707643Z" level=info msg="StartContainer for \"229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6\"" Dec 16 12:17:21.971163 containerd[1971]: time="2025-12-16T12:17:21.971019835Z" level=info msg="connecting to shim 229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6" address="unix:///run/containerd/s/1712bce331c2fc426c4417e097f13163dbc70ffcc3505f331a0a1a871f6426a7" protocol=ttrpc version=3 Dec 16 12:17:21.985265 containerd[1971]: time="2025-12-16T12:17:21.985028515Z" level=info msg="CreateContainer within sandbox \"10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0\"" Dec 16 12:17:21.987171 containerd[1971]: time="2025-12-16T12:17:21.986839399Z" level=info msg="StartContainer for \"e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0\"" Dec 16 12:17:21.991302 containerd[1971]: time="2025-12-16T12:17:21.991244335Z" level=info msg="connecting to shim e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0" address="unix:///run/containerd/s/54cae46c8b306d43329af42cd45bbab01f0ff2cf83421204d687166d9ee956d6" protocol=ttrpc version=3 Dec 16 12:17:21.992318 containerd[1971]: time="2025-12-16T12:17:21.992272711Z" level=info msg="Container d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:22.014706 containerd[1971]: time="2025-12-16T12:17:22.014605923Z" level=info msg="CreateContainer within sandbox \"67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a\"" Dec 16 12:17:22.019931 containerd[1971]: time="2025-12-16T12:17:22.019698207Z" level=info msg="StartContainer for \"d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a\"" Dec 16 12:17:22.022603 systemd[1]: Started cri-containerd-229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6.scope - libcontainer container 229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6. Dec 16 12:17:22.028091 containerd[1971]: time="2025-12-16T12:17:22.027665835Z" level=info msg="connecting to shim d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a" address="unix:///run/containerd/s/4a05be2f28b5aba37ec9f2b295b4fa1940412bd2f5846a709a18fc40935cca78" protocol=ttrpc version=3 Dec 16 12:17:22.051687 systemd[1]: Started cri-containerd-e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0.scope - libcontainer container e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0. 
Dec 16 12:17:22.082000 audit: BPF prog-id=105 op=LOAD Dec 16 12:17:22.084000 audit: BPF prog-id=106 op=LOAD Dec 16 12:17:22.084000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.085000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:17:22.085000 audit[3105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.085000 audit: BPF prog-id=107 op=LOAD Dec 16 12:17:22.085000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.085000 audit: BPF prog-id=108 op=LOAD Dec 16 12:17:22.085000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.086000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:17:22.086000 audit[3105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.086000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:17:22.086000 audit[3105]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.087000 audit: BPF prog-id=109 op=LOAD Dec 16 12:17:22.087000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2972 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393433333932326333616334353165326264626238313335613835 Dec 16 12:17:22.090667 systemd[1]: Started cri-containerd-d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a.scope - libcontainer container d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a. Dec 16 12:17:22.140000 audit: BPF prog-id=110 op=LOAD Dec 16 12:17:22.144000 audit: BPF prog-id=111 op=LOAD Dec 16 12:17:22.144000 audit[3112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.146000 audit: BPF prog-id=111 op=UNLOAD Dec 16 12:17:22.146000 audit[3112]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.149000 audit: BPF prog-id=112 op=LOAD Dec 16 12:17:22.149000 audit[3112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.149000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.152000 audit: BPF prog-id=113 op=LOAD Dec 16 12:17:22.152000 audit[3112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.153000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:17:22.153000 audit[3112]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.153000 audit: BPF prog-id=112 op=UNLOAD Dec 16 12:17:22.153000 audit[3112]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.155000 audit: BPF prog-id=114 op=LOAD Dec 16 12:17:22.155000 audit[3112]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2997 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538663634633365393534323538373539363838343161333162646137 Dec 16 12:17:22.191000 audit: BPF prog-id=115 op=LOAD Dec 16 12:17:22.194000 audit: BPF prog-id=116 op=LOAD Dec 16 12:17:22.194000 audit[3132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.194000 audit: BPF prog-id=116 op=UNLOAD Dec 16 12:17:22.194000 audit[3132]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.196000 audit: BPF prog-id=117 op=LOAD Dec 16 12:17:22.196000 audit[3132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.197000 audit: BPF prog-id=118 op=LOAD Dec 16 12:17:22.197000 audit[3132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.198000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:17:22.198000 audit[3132]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.200000 audit: BPF prog-id=117 op=UNLOAD Dec 16 12:17:22.200000 audit[3132]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.200000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.201000 audit: BPF prog-id=119 op=LOAD Dec 16 12:17:22.203428 containerd[1971]: time="2025-12-16T12:17:22.203371300Z" level=info msg="StartContainer for \"229433922c3ac451e2bdbb8135a857cb44699b1a55ccbb05a45d71092c620be6\" returns successfully" Dec 16 12:17:22.201000 audit[3132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3023 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323338343039343365393839383162356538353766323430633339 Dec 16 12:17:22.265392 containerd[1971]: time="2025-12-16T12:17:22.265304824Z" level=info msg="StartContainer for \"e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0\" returns successfully" Dec 16 12:17:22.344296 containerd[1971]: time="2025-12-16T12:17:22.344179936Z" level=info msg="StartContainer for \"d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a\" returns successfully" Dec 16 12:17:22.368485 kubelet[2931]: W1216 12:17:22.368395 2931 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.6:6443: connect: connection refused Dec 16 12:17:22.368768 kubelet[2931]: E1216 12:17:22.368506 2931 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.6:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:17:22.672201 kubelet[2931]: I1216 12:17:22.669910 2931 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-6" Dec 16 12:17:23.143931 kubelet[2931]: E1216 12:17:23.143481 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:23.153455 kubelet[2931]: E1216 12:17:23.153392 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:23.160085 kubelet[2931]: E1216 12:17:23.159118 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:24.163072 kubelet[2931]: E1216 12:17:24.162997 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:24.164588 kubelet[2931]: E1216 12:17:24.164213 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" 
node="ip-172-31-20-6" Dec 16 12:17:24.167438 kubelet[2931]: E1216 12:17:24.167352 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:25.163991 kubelet[2931]: E1216 12:17:25.163904 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:25.167094 kubelet[2931]: E1216 12:17:25.165712 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:27.020015 kubelet[2931]: I1216 12:17:27.019947 2931 apiserver.go:52] "Watching apiserver" Dec 16 12:17:27.039972 kubelet[2931]: E1216 12:17:27.039848 2931 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:27.084927 kubelet[2931]: E1216 12:17:27.084863 2931 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-6\" not found" node="ip-172-31-20-6" Dec 16 12:17:27.135015 kubelet[2931]: I1216 12:17:27.134951 2931 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-6" Dec 16 12:17:27.135015 kubelet[2931]: E1216 12:17:27.135008 2931 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-6\": node \"ip-172-31-20-6\" not found" Dec 16 12:17:27.135565 kubelet[2931]: I1216 12:17:27.135378 2931 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:17:27.235204 kubelet[2931]: I1216 12:17:27.235143 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-6" Dec 16 12:17:27.268112 kubelet[2931]: E1216 12:17:27.268035 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-6" Dec 16 12:17:27.268112 kubelet[2931]: I1216 12:17:27.268105 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:27.281956 kubelet[2931]: E1216 12:17:27.281915 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:27.282187 kubelet[2931]: I1216 12:17:27.282167 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:27.287868 kubelet[2931]: E1216 12:17:27.287814 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:29.298626 systemd[1]: Reload requested from client PID 3204 ('systemctl') (unit session-8.scope)... Dec 16 12:17:29.298660 systemd[1]: Reloading... Dec 16 12:17:29.487006 kubelet[2931]: I1216 12:17:29.484511 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:29.517101 zram_generator::config[3254]: No configuration found. 
Dec 16 12:17:30.125466 systemd[1]: Reloading finished in 826 ms. Dec 16 12:17:30.170470 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:30.190880 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:17:30.198897 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:17:30.199165 kernel: audit: type=1131 audit(1765887450.191:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:30.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:30.192158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:30.192261 systemd[1]: kubelet.service: Consumed 1.451s CPU time, 128.9M memory peak. Dec 16 12:17:30.202846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:17:30.205000 audit: BPF prog-id=120 op=LOAD Dec 16 12:17:30.214839 kernel: audit: type=1334 audit(1765887450.205:407): prog-id=120 op=LOAD Dec 16 12:17:30.214970 kernel: audit: type=1334 audit(1765887450.205:408): prog-id=71 op=UNLOAD Dec 16 12:17:30.205000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:17:30.216931 kernel: audit: type=1334 audit(1765887450.207:409): prog-id=121 op=LOAD Dec 16 12:17:30.207000 audit: BPF prog-id=121 op=LOAD Dec 16 12:17:30.222199 kernel: audit: type=1334 audit(1765887450.207:410): prog-id=122 op=LOAD Dec 16 12:17:30.207000 audit: BPF prog-id=122 op=LOAD Dec 16 12:17:30.224821 kernel: audit: type=1334 audit(1765887450.207:411): prog-id=72 op=UNLOAD Dec 16 12:17:30.207000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:17:30.207000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:17:30.209000 audit: BPF prog-id=123 op=LOAD Dec 16 12:17:30.229344 kernel: audit: type=1334 audit(1765887450.207:412): prog-id=73 op=UNLOAD Dec 16 12:17:30.229393 kernel: audit: type=1334 audit(1765887450.209:413): prog-id=123 op=LOAD Dec 16 12:17:30.209000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:17:30.231142 kernel: audit: type=1334 audit(1765887450.209:414): prog-id=77 op=UNLOAD Dec 16 12:17:30.212000 audit: BPF prog-id=124 op=LOAD Dec 16 12:17:30.234574 kernel: audit: type=1334 audit(1765887450.212:415): prog-id=124 op=LOAD Dec 16 12:17:30.212000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:17:30.212000 audit: BPF prog-id=125 op=LOAD Dec 16 12:17:30.212000 audit: BPF prog-id=126 op=LOAD Dec 16 12:17:30.212000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:17:30.212000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:17:30.214000 audit: BPF prog-id=127 op=LOAD Dec 16 12:17:30.214000 audit: BPF prog-id=128 op=LOAD Dec 16 12:17:30.214000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:17:30.214000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:17:30.216000 audit: BPF prog-id=129 op=LOAD Dec 16 12:17:30.216000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:17:30.218000 audit: BPF prog-id=130 op=LOAD Dec 16 12:17:30.219000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:17:30.221000 audit: BPF prog-id=131 op=LOAD Dec 16 12:17:30.221000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:17:30.222000 audit: BPF prog-id=132 op=LOAD Dec 16 12:17:30.222000 audit: BPF prog-id=133 op=LOAD Dec 16 12:17:30.222000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:17:30.222000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:17:30.226000 audit: BPF prog-id=134 op=LOAD Dec 16 
12:17:30.232000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:17:30.232000 audit: BPF prog-id=135 op=LOAD Dec 16 12:17:30.232000 audit: BPF prog-id=136 op=LOAD Dec 16 12:17:30.232000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:17:30.232000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:17:30.235000 audit: BPF prog-id=137 op=LOAD Dec 16 12:17:30.235000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:17:30.236000 audit: BPF prog-id=138 op=LOAD Dec 16 12:17:30.236000 audit: BPF prog-id=139 op=LOAD Dec 16 12:17:30.236000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:17:30.236000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:17:30.270994 update_engine[1942]: I20251216 12:17:30.270148 1942 update_attempter.cc:509] Updating boot flags... Dec 16 12:17:30.782775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:17:30.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:30.809162 (kubelet)[3411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:17:31.027381 kubelet[3411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:17:31.029553 kubelet[3411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:17:31.029553 kubelet[3411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:17:31.029553 kubelet[3411]: I1216 12:17:31.027998 3411 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:17:31.067564 kubelet[3411]: I1216 12:17:31.066572 3411 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:17:31.067564 kubelet[3411]: I1216 12:17:31.066628 3411 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:17:31.070172 kubelet[3411]: I1216 12:17:31.069231 3411 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:17:31.077625 kubelet[3411]: I1216 12:17:31.077508 3411 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:17:31.092837 kubelet[3411]: I1216 12:17:31.092771 3411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:17:31.134235 kubelet[3411]: I1216 12:17:31.134186 3411 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:17:31.152983 kubelet[3411]: I1216 12:17:31.152924 3411 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:17:31.154029 kubelet[3411]: I1216 12:17:31.153730 3411 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:17:31.154448 kubelet[3411]: I1216 12:17:31.153793 3411 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:17:31.155643 kubelet[3411]: I1216 12:17:31.154471 3411 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:17:31.155643 kubelet[3411]: I1216 12:17:31.154494 3411 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:17:31.155643 kubelet[3411]: I1216 12:17:31.154573 3411 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:17:31.155643 kubelet[3411]: I1216 12:17:31.155509 3411 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:17:31.156605 kubelet[3411]: I1216 12:17:31.155874 3411 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:17:31.156605 kubelet[3411]: I1216 12:17:31.155962 3411 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:17:31.156605 kubelet[3411]: I1216 12:17:31.155985 3411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:17:31.171066 kubelet[3411]: I1216 12:17:31.170835 3411 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:17:31.175659 kubelet[3411]: I1216 12:17:31.174756 3411 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:17:31.183403 kubelet[3411]: I1216 12:17:31.179707 3411 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:17:31.183403 kubelet[3411]: I1216 12:17:31.179771 3411 server.go:1287] "Started kubelet" Dec 16 12:17:31.189269 kubelet[3411]: I1216 12:17:31.184332 3411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:17:31.189269 kubelet[3411]: I1216 
12:17:31.184774 3411 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:17:31.210465 kubelet[3411]: I1216 12:17:31.210292 3411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:17:31.217966 kubelet[3411]: I1216 12:17:31.217158 3411 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:17:31.227263 kubelet[3411]: I1216 12:17:31.223677 3411 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:17:31.229264 kubelet[3411]: I1216 12:17:31.228590 3411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:17:31.230762 kubelet[3411]: I1216 12:17:31.230712 3411 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:17:31.232757 kubelet[3411]: E1216 12:17:31.232100 3411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-6\" not found" Dec 16 12:17:31.234855 kubelet[3411]: I1216 12:17:31.234806 3411 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:17:31.235894 kubelet[3411]: I1216 12:17:31.235817 3411 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:17:31.309618 kubelet[3411]: I1216 12:17:31.309358 3411 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:17:31.348246 kubelet[3411]: E1216 12:17:31.346224 3411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-6\" not found" Dec 16 12:17:31.489556 kubelet[3411]: E1216 12:17:31.460709 3411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-6\" not found" Dec 16 12:17:31.490444 kubelet[3411]: E1216 12:17:31.490392 3411 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:17:31.522164 kubelet[3411]: I1216 12:17:31.521463 3411 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:17:31.522613 kubelet[3411]: I1216 12:17:31.522584 3411 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:17:31.525070 kubelet[3411]: I1216 12:17:31.522811 3411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:17:31.544206 kubelet[3411]: I1216 12:17:31.544106 3411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:17:31.544749 kubelet[3411]: I1216 12:17:31.544347 3411 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:17:31.544749 kubelet[3411]: I1216 12:17:31.544515 3411 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:17:31.544749 kubelet[3411]: I1216 12:17:31.544532 3411 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:17:31.545447 kubelet[3411]: E1216 12:17:31.545141 3411 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:17:31.649574 kubelet[3411]: E1216 12:17:31.647850 3411 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:17:31.747294 kubelet[3411]: I1216 12:17:31.747249 3411 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:17:31.747294 kubelet[3411]: I1216 12:17:31.747285 3411 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:17:31.747506 kubelet[3411]: I1216 12:17:31.747323 3411 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:17:31.750577 kubelet[3411]: I1216 12:17:31.750516 3411 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:17:31.750723 kubelet[3411]: I1216 12:17:31.750558 3411 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:17:31.750723 kubelet[3411]: I1216 12:17:31.750610 3411 policy_none.go:49] "None policy: Start" Dec 16 12:17:31.750723 kubelet[3411]: I1216 12:17:31.750629 3411 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:17:31.750723 kubelet[3411]: I1216 12:17:31.750652 3411 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:17:31.750915 kubelet[3411]: I1216 12:17:31.750854 3411 state_mem.go:75] "Updated machine memory state" Dec 16 12:17:31.769401 kubelet[3411]: I1216 12:17:31.769357 3411 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:17:31.769671 kubelet[3411]: I1216 12:17:31.769642 3411 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:17:31.769781 kubelet[3411]: I1216 12:17:31.769672 3411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:17:31.773838 kubelet[3411]: I1216 12:17:31.773792 3411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:17:31.776725 kubelet[3411]: E1216 12:17:31.776675 3411 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:17:31.851447 kubelet[3411]: I1216 12:17:31.851346 3411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:31.852716 kubelet[3411]: I1216 12:17:31.852367 3411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-6" Dec 16 12:17:31.856030 kubelet[3411]: I1216 12:17:31.852948 3411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:31.873346 kubelet[3411]: E1216 12:17:31.873217 3411 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-6\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:31.889325 kubelet[3411]: I1216 12:17:31.889157 3411 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-6" Dec 16 12:17:31.906519 kubelet[3411]: I1216 12:17:31.906356 3411 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-6" Dec 16 12:17:31.907504 kubelet[3411]: I1216 12:17:31.907469 3411 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-6" Dec 16 12:17:31.975290 kubelet[3411]: I1216 12:17:31.975220 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1b652fec7071822d2ad5e5c3467308a-ca-certs\") pod \"kube-apiserver-ip-172-31-20-6\" (UID: \"b1b652fec7071822d2ad5e5c3467308a\") " pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:31.975444 kubelet[3411]: I1216 12:17:31.975296 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:31.975444 kubelet[3411]: I1216 12:17:31.975343 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:31.975444 kubelet[3411]: I1216 12:17:31.975391 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db86a9c599c7516e8f7d726a117eb187-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-6\" (UID: \"db86a9c599c7516e8f7d726a117eb187\") " pod="kube-system/kube-scheduler-ip-172-31-20-6" Dec 16 12:17:31.975444 kubelet[3411]: I1216 12:17:31.975426 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1b652fec7071822d2ad5e5c3467308a-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-6\" (UID: \"b1b652fec7071822d2ad5e5c3467308a\") " pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:31.975675 kubelet[3411]: I1216 12:17:31.975460 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1b652fec7071822d2ad5e5c3467308a-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-20-6\" (UID: \"b1b652fec7071822d2ad5e5c3467308a\") " pod="kube-system/kube-apiserver-ip-172-31-20-6" Dec 16 12:17:31.975675 kubelet[3411]: I1216 12:17:31.975497 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:31.975675 kubelet[3411]: I1216 12:17:31.975534 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:31.975675 kubelet[3411]: I1216 12:17:31.975575 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ad99346c528774cdc680c6de71930cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-6\" (UID: \"7ad99346c528774cdc680c6de71930cf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-6" Dec 16 12:17:32.162265 kubelet[3411]: I1216 12:17:32.162130 3411 apiserver.go:52] "Watching apiserver" Dec 16 12:17:32.235937 kubelet[3411]: I1216 12:17:32.235882 3411 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:17:32.273993 kubelet[3411]: I1216 12:17:32.273893 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-6" podStartSLOduration=1.27387323 podStartE2EDuration="1.27387323s" podCreationTimestamp="2025-12-16 12:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:32.269301398 +0000 UTC m=+1.444271876" watchObservedRunningTime="2025-12-16 12:17:32.27387323 +0000 UTC m=+1.448843672" Dec 16 12:17:32.322406 kubelet[3411]: I1216 12:17:32.322319 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-6" podStartSLOduration=3.322294526 podStartE2EDuration="3.322294526s" podCreationTimestamp="2025-12-16 12:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:32.302636522 +0000 UTC m=+1.477606988" watchObservedRunningTime="2025-12-16 12:17:32.322294526 +0000 UTC m=+1.497264968" Dec 16 12:17:32.340720 kubelet[3411]: I1216 12:17:32.340633 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-6" podStartSLOduration=1.340610078 podStartE2EDuration="1.340610078s" podCreationTimestamp="2025-12-16 12:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:32.323827166 +0000 UTC m=+1.498797632" watchObservedRunningTime="2025-12-16 12:17:32.340610078 +0000 UTC m=+1.515580520" Dec 16 12:17:34.147777 kubelet[3411]: I1216 12:17:34.147251 3411 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:17:34.150378 
containerd[1971]: time="2025-12-16T12:17:34.148757307Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:17:34.151193 kubelet[3411]: I1216 12:17:34.151147 3411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:17:35.164816 systemd[1]: Created slice kubepods-besteffort-pod8284bc2e_1c8c_4e49_b0af_483f32a97484.slice - libcontainer container kubepods-besteffort-pod8284bc2e_1c8c_4e49_b0af_483f32a97484.slice. Dec 16 12:17:35.193971 kubelet[3411]: I1216 12:17:35.193902 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8284bc2e-1c8c-4e49-b0af-483f32a97484-kube-proxy\") pod \"kube-proxy-zc8nq\" (UID: \"8284bc2e-1c8c-4e49-b0af-483f32a97484\") " pod="kube-system/kube-proxy-zc8nq" Dec 16 12:17:35.194558 kubelet[3411]: I1216 12:17:35.193993 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8284bc2e-1c8c-4e49-b0af-483f32a97484-xtables-lock\") pod \"kube-proxy-zc8nq\" (UID: \"8284bc2e-1c8c-4e49-b0af-483f32a97484\") " pod="kube-system/kube-proxy-zc8nq" Dec 16 12:17:35.194558 kubelet[3411]: I1216 12:17:35.194032 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8284bc2e-1c8c-4e49-b0af-483f32a97484-lib-modules\") pod \"kube-proxy-zc8nq\" (UID: \"8284bc2e-1c8c-4e49-b0af-483f32a97484\") " pod="kube-system/kube-proxy-zc8nq" Dec 16 12:17:35.194558 kubelet[3411]: I1216 12:17:35.194332 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hdgm\" (UniqueName: \"kubernetes.io/projected/8284bc2e-1c8c-4e49-b0af-483f32a97484-kube-api-access-6hdgm\") pod \"kube-proxy-zc8nq\" (UID: \"8284bc2e-1c8c-4e49-b0af-483f32a97484\") " pod="kube-system/kube-proxy-zc8nq" Dec 16 12:17:35.348529 systemd[1]: Created slice kubepods-besteffort-podf6268e6c_0326_4253_ad6f_b6289565f354.slice - libcontainer container kubepods-besteffort-podf6268e6c_0326_4253_ad6f_b6289565f354.slice. 
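Most of the volume from here on is Linux audit records: each runc invocation that containerd makes to start the sandbox containers, and each iptables/ip6tables call made while kube-proxy programs its chains, is logged as a SYSCALL record plus a PROCTITLE record whose value is the invoking process's argv, hex-encoded with NUL bytes between the arguments. A minimal decoding sketch, assuming Python; the helper name is illustrative:

    def decode_proctitle(hex_argv: str) -> str:
        """Turn an audit PROCTITLE value (hex-encoded, NUL-separated argv) back into a command line."""
        return " ".join(
            part.decode("utf-8", errors="replace")
            for part in bytes.fromhex(hex_argv).split(b"\x00")
            if part
        )

    # First NETFILTER_CFG record below (comm="iptables", table=mangle):
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Read this way, the NETFILTER_CFG sequence below is the usual kube-proxy bootstrap: KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains created in the mangle, nat and filter tables for both IPv4 and IPv6, then hooked into INPUT, OUTPUT, FORWARD, PREROUTING and POSTROUTING.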
Dec 16 12:17:35.395473 kubelet[3411]: I1216 12:17:35.395416 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f6268e6c-0326-4253-ad6f-b6289565f354-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sjp2r\" (UID: \"f6268e6c-0326-4253-ad6f-b6289565f354\") " pod="tigera-operator/tigera-operator-7dcd859c48-sjp2r" Dec 16 12:17:35.395473 kubelet[3411]: I1216 12:17:35.395495 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtfjg\" (UniqueName: \"kubernetes.io/projected/f6268e6c-0326-4253-ad6f-b6289565f354-kube-api-access-dtfjg\") pod \"tigera-operator-7dcd859c48-sjp2r\" (UID: \"f6268e6c-0326-4253-ad6f-b6289565f354\") " pod="tigera-operator/tigera-operator-7dcd859c48-sjp2r" Dec 16 12:17:35.479213 containerd[1971]: time="2025-12-16T12:17:35.478820418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zc8nq,Uid:8284bc2e-1c8c-4e49-b0af-483f32a97484,Namespace:kube-system,Attempt:0,}" Dec 16 12:17:35.533519 containerd[1971]: time="2025-12-16T12:17:35.533318958Z" level=info msg="connecting to shim 304d65ff677c663639e35d2037c434ae525bb4f169b1ccc960b952631aa62913" address="unix:///run/containerd/s/30f8ae59d95eff1a2137046297ab5dbc68dfffd4f6e80fbea1b78d44dc2eb515" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:35.592475 systemd[1]: Started cri-containerd-304d65ff677c663639e35d2037c434ae525bb4f169b1ccc960b952631aa62913.scope - libcontainer container 304d65ff677c663639e35d2037c434ae525bb4f169b1ccc960b952631aa62913. Dec 16 12:17:35.615455 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:17:35.615607 kernel: audit: type=1334 audit(1765887455.611:448): prog-id=140 op=LOAD Dec 16 12:17:35.611000 audit: BPF prog-id=140 op=LOAD Dec 16 12:17:35.613000 audit: BPF prog-id=141 op=LOAD Dec 16 12:17:35.617480 kernel: audit: type=1334 audit(1765887455.613:449): prog-id=141 op=LOAD Dec 16 12:17:35.613000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.630786 kernel: audit: type=1300 audit(1765887455.613:449): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.630939 kernel: audit: type=1327 audit(1765887455.613:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.614000 audit: BPF prog-id=141 op=UNLOAD Dec 16 12:17:35.633469 kernel: audit: type=1334 audit(1765887455.614:450): prog-id=141 op=UNLOAD Dec 16 12:17:35.633909 kernel: audit: type=1300 audit(1765887455.614:450): 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit[3563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.646089 kernel: audit: type=1327 audit(1765887455.614:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.614000 audit: BPF prog-id=142 op=LOAD Dec 16 12:17:35.648561 kernel: audit: type=1334 audit(1765887455.614:451): prog-id=142 op=LOAD Dec 16 12:17:35.648859 kernel: audit: type=1300 audit(1765887455.614:451): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.661595 kernel: audit: type=1327 audit(1765887455.614:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.614000 audit: BPF prog-id=143 op=LOAD Dec 16 12:17:35.614000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.614000 audit: BPF prog-id=143 op=UNLOAD Dec 16 12:17:35.614000 audit[3563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.614000 audit: BPF prog-id=142 op=UNLOAD Dec 16 12:17:35.614000 audit[3563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.614000 audit: BPF prog-id=144 op=LOAD Dec 16 12:17:35.614000 audit[3563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3552 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330346436356666363737633636333633396533356432303337633433 Dec 16 12:17:35.665312 containerd[1971]: time="2025-12-16T12:17:35.663626791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sjp2r,Uid:f6268e6c-0326-4253-ad6f-b6289565f354,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:17:35.686270 containerd[1971]: time="2025-12-16T12:17:35.686092927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zc8nq,Uid:8284bc2e-1c8c-4e49-b0af-483f32a97484,Namespace:kube-system,Attempt:0,} returns sandbox id \"304d65ff677c663639e35d2037c434ae525bb4f169b1ccc960b952631aa62913\"" Dec 16 12:17:35.693075 containerd[1971]: time="2025-12-16T12:17:35.692137591Z" level=info msg="CreateContainer within sandbox \"304d65ff677c663639e35d2037c434ae525bb4f169b1ccc960b952631aa62913\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:17:35.727653 containerd[1971]: time="2025-12-16T12:17:35.727555339Z" level=info msg="connecting to shim caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee" address="unix:///run/containerd/s/8822b850440a04f998592980a40b5be9139a4d107f089a0abb02aa56a783d4a6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:17:35.728999 containerd[1971]: time="2025-12-16T12:17:35.728914387Z" level=info msg="Container 8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:35.751936 containerd[1971]: time="2025-12-16T12:17:35.751753183Z" level=info msg="CreateContainer within sandbox \"304d65ff677c663639e35d2037c434ae525bb4f169b1ccc960b952631aa62913\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88\"" Dec 16 12:17:35.754113 containerd[1971]: time="2025-12-16T12:17:35.753436639Z" level=info msg="StartContainer for \"8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88\"" Dec 16 12:17:35.763647 containerd[1971]: time="2025-12-16T12:17:35.763497919Z" level=info msg="connecting to shim 8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88" address="unix:///run/containerd/s/30f8ae59d95eff1a2137046297ab5dbc68dfffd4f6e80fbea1b78d44dc2eb515" protocol=ttrpc version=3 Dec 16 12:17:35.785515 systemd[1]: Started cri-containerd-caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee.scope - libcontainer container caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee. Dec 16 12:17:35.823666 systemd[1]: Started cri-containerd-8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88.scope - libcontainer container 8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88. Dec 16 12:17:35.829000 audit: BPF prog-id=145 op=LOAD Dec 16 12:17:35.832000 audit: BPF prog-id=146 op=LOAD Dec 16 12:17:35.832000 audit[3606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.832000 audit: BPF prog-id=146 op=UNLOAD Dec 16 12:17:35.832000 audit[3606]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.834000 audit: BPF prog-id=147 op=LOAD Dec 16 12:17:35.834000 audit[3606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.836000 audit: BPF prog-id=148 op=LOAD Dec 16 12:17:35.836000 audit[3606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.836000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.836000 audit: BPF prog-id=148 op=UNLOAD Dec 16 12:17:35.836000 audit[3606]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.836000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:17:35.836000 audit[3606]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.836000 audit: BPF prog-id=149 op=LOAD Dec 16 12:17:35.836000 audit[3606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3595 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656363373066323930376333363461333734383830393937326366 Dec 16 12:17:35.914172 containerd[1971]: time="2025-12-16T12:17:35.912204368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sjp2r,Uid:f6268e6c-0326-4253-ad6f-b6289565f354,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee\"" Dec 16 12:17:35.917397 containerd[1971]: time="2025-12-16T12:17:35.917347388Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:17:35.928000 audit: BPF prog-id=150 op=LOAD Dec 16 12:17:35.928000 audit[3618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3552 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864303739383966373733656230623936303737353939353361393366 Dec 16 12:17:35.929000 audit: BPF prog-id=151 op=LOAD Dec 16 12:17:35.929000 
audit[3618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3552 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864303739383966373733656230623936303737353939353361393366 Dec 16 12:17:35.929000 audit: BPF prog-id=151 op=UNLOAD Dec 16 12:17:35.929000 audit[3618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3552 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864303739383966373733656230623936303737353939353361393366 Dec 16 12:17:35.929000 audit: BPF prog-id=150 op=UNLOAD Dec 16 12:17:35.929000 audit[3618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3552 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864303739383966373733656230623936303737353939353361393366 Dec 16 12:17:35.929000 audit: BPF prog-id=152 op=LOAD Dec 16 12:17:35.929000 audit[3618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3552 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:35.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864303739383966373733656230623936303737353939353361393366 Dec 16 12:17:35.973762 containerd[1971]: time="2025-12-16T12:17:35.973622540Z" level=info msg="StartContainer for \"8d07989f773eb0b9607759953a93f7adf74500a2305411e7b77d5955d63abb88\" returns successfully" Dec 16 12:17:36.238000 audit[3695]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.238000 audit[3695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc547fc0 a2=0 a3=1 items=0 ppid=3639 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:17:36.239000 audit[3696]: 
NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.239000 audit[3696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe712a20 a2=0 a3=1 items=0 ppid=3639 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:17:36.242000 audit[3698]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.242000 audit[3698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca720de0 a2=0 a3=1 items=0 ppid=3639 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:17:36.245000 audit[3699]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.245000 audit[3699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd29fa700 a2=0 a3=1 items=0 ppid=3639 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:17:36.245000 audit[3697]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.245000 audit[3697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8a4ddb0 a2=0 a3=1 items=0 ppid=3639 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:17:36.251000 audit[3701]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.251000 audit[3701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda646400 a2=0 a3=1 items=0 ppid=3639 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:17:36.350000 audit[3702]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.350000 audit[3702]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=108 a0=3 a1=ffffd040aef0 a2=0 a3=1 items=0 ppid=3639 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:17:36.356000 audit[3704]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.356000 audit[3704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe35a6c10 a2=0 a3=1 items=0 ppid=3639 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:17:36.365000 audit[3707]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.365000 audit[3707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcb283840 a2=0 a3=1 items=0 ppid=3639 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:17:36.368000 audit[3708]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.368000 audit[3708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2b418b0 a2=0 a3=1 items=0 ppid=3639 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:17:36.373000 audit[3710]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.373000 audit[3710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe3556d70 a2=0 a3=1 items=0 ppid=3639 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:17:36.378000 
audit[3711]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.378000 audit[3711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7f6de10 a2=0 a3=1 items=0 ppid=3639 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:17:36.384000 audit[3713]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.384000 audit[3713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffebf2060 a2=0 a3=1 items=0 ppid=3639 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:17:36.392000 audit[3716]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.392000 audit[3716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcbfdfbc0 a2=0 a3=1 items=0 ppid=3639 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:17:36.395000 audit[3717]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.395000 audit[3717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebab7bc0 a2=0 a3=1 items=0 ppid=3639 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:17:36.401000 audit[3719]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.401000 audit[3719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd25e2740 a2=0 a3=1 items=0 ppid=3639 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.401000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:17:36.403000 audit[3720]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.403000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff83ff670 a2=0 a3=1 items=0 ppid=3639 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:17:36.411000 audit[3722]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.411000 audit[3722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc6fd1620 a2=0 a3=1 items=0 ppid=3639 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:17:36.419000 audit[3725]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.419000 audit[3725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2dce000 a2=0 a3=1 items=0 ppid=3639 pid=3725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:17:36.427000 audit[3728]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.427000 audit[3728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffeaa4070 a2=0 a3=1 items=0 ppid=3639 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:17:36.430000 audit[3729]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.430000 audit[3729]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe47d8d80 a2=0 a3=1 items=0 ppid=3639 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:17:36.436000 audit[3731]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.436000 audit[3731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffecd0ac0 a2=0 a3=1 items=0 ppid=3639 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:17:36.443000 audit[3734]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.443000 audit[3734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdfa241b0 a2=0 a3=1 items=0 ppid=3639 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:17:36.446000 audit[3735]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.446000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2992220 a2=0 a3=1 items=0 ppid=3639 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:17:36.452000 audit[3737]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:17:36.452000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcb9e5320 a2=0 a3=1 items=0 ppid=3639 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:17:36.490000 audit[3743]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3743 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:36.490000 audit[3743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc60ae6c0 a2=0 a3=1 items=0 ppid=3639 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:36.501000 audit[3743]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:36.501000 audit[3743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc60ae6c0 a2=0 a3=1 items=0 ppid=3639 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:36.504000 audit[3748]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.504000 audit[3748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffa1db310 a2=0 a3=1 items=0 ppid=3639 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:17:36.509000 audit[3750]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3750 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.509000 audit[3750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffea5b5070 a2=0 a3=1 items=0 ppid=3639 pid=3750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:17:36.518000 audit[3753]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.518000 audit[3753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc223c5c0 a2=0 a3=1 items=0 ppid=3639 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.518000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:17:36.520000 audit[3754]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.520000 audit[3754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8de86a0 a2=0 a3=1 items=0 ppid=3639 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:17:36.527000 audit[3756]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.527000 audit[3756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe0ce78d0 a2=0 a3=1 items=0 ppid=3639 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:17:36.530000 audit[3757]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.530000 audit[3757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf8ee470 a2=0 a3=1 items=0 ppid=3639 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:17:36.539000 audit[3759]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3759 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.539000 audit[3759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0797630 a2=0 a3=1 items=0 ppid=3639 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:17:36.548000 audit[3762]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.548000 audit[3762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd11f8770 a2=0 a3=1 items=0 ppid=3639 pid=3762 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:17:36.550000 audit[3763]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.550000 audit[3763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5df3fa0 a2=0 a3=1 items=0 ppid=3639 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:17:36.556000 audit[3765]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.556000 audit[3765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdff58680 a2=0 a3=1 items=0 ppid=3639 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:17:36.558000 audit[3766]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.558000 audit[3766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5759550 a2=0 a3=1 items=0 ppid=3639 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:17:36.565000 audit[3768]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.565000 audit[3768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebf175b0 a2=0 a3=1 items=0 ppid=3639 pid=3768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:17:36.573000 audit[3771]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule 
pid=3771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.573000 audit[3771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd02613e0 a2=0 a3=1 items=0 ppid=3639 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:17:36.581000 audit[3774]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.581000 audit[3774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe56d1bd0 a2=0 a3=1 items=0 ppid=3639 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:17:36.584000 audit[3775]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.584000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff2892010 a2=0 a3=1 items=0 ppid=3639 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:17:36.591000 audit[3777]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.591000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd7a5c610 a2=0 a3=1 items=0 ppid=3639 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:17:36.599000 audit[3780]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.599000 audit[3780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7f3e150 a2=0 a3=1 items=0 ppid=3639 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.599000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:17:36.602000 audit[3781]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.602000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbb4c1f0 a2=0 a3=1 items=0 ppid=3639 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:17:36.607000 audit[3783]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.607000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffddf837e0 a2=0 a3=1 items=0 ppid=3639 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:17:36.610000 audit[3784]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.610000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd276dc60 a2=0 a3=1 items=0 ppid=3639 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:17:36.616000 audit[3786]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.616000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff564cb50 a2=0 a3=1 items=0 ppid=3639 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:17:36.625000 audit[3789]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:17:36.625000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcd6dd2c0 a2=0 a3=1 items=0 ppid=3639 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:17:36.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:17:36.632000 audit[3791]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:17:36.632000 audit[3791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe7e55050 a2=0 a3=1 items=0 ppid=3639 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.632000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:36.634000 audit[3791]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:17:36.634000 audit[3791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe7e55050 a2=0 a3=1 items=0 ppid=3639 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:36.634000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:36.710043 kubelet[3411]: I1216 12:17:36.709140 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zc8nq" podStartSLOduration=1.709117064 podStartE2EDuration="1.709117064s" podCreationTimestamp="2025-12-16 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:36.708527996 +0000 UTC m=+5.883498450" watchObservedRunningTime="2025-12-16 12:17:36.709117064 +0000 UTC m=+5.884087530" Dec 16 12:17:37.275027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044210033.mount: Deactivated successfully. 
Dec 16 12:17:38.181095 containerd[1971]: time="2025-12-16T12:17:38.180928051Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:38.185291 containerd[1971]: time="2025-12-16T12:17:38.185210323Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:17:38.188044 containerd[1971]: time="2025-12-16T12:17:38.187944667Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:38.195002 containerd[1971]: time="2025-12-16T12:17:38.194918827Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:17:38.197544 containerd[1971]: time="2025-12-16T12:17:38.197373187Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.279516387s" Dec 16 12:17:38.197544 containerd[1971]: time="2025-12-16T12:17:38.197433631Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:17:38.203658 containerd[1971]: time="2025-12-16T12:17:38.203498899Z" level=info msg="CreateContainer within sandbox \"caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:17:38.228290 containerd[1971]: time="2025-12-16T12:17:38.228221503Z" level=info msg="Container b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:17:38.235157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484032000.mount: Deactivated successfully. Dec 16 12:17:38.244614 containerd[1971]: time="2025-12-16T12:17:38.244461223Z" level=info msg="CreateContainer within sandbox \"caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\"" Dec 16 12:17:38.246567 containerd[1971]: time="2025-12-16T12:17:38.246161503Z" level=info msg="StartContainer for \"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\"" Dec 16 12:17:38.248510 containerd[1971]: time="2025-12-16T12:17:38.248372935Z" level=info msg="connecting to shim b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f" address="unix:///run/containerd/s/8822b850440a04f998592980a40b5be9139a4d107f089a0abb02aa56a783d4a6" protocol=ttrpc version=3 Dec 16 12:17:38.292482 systemd[1]: Started cri-containerd-b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f.scope - libcontainer container b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f. 
Dec 16 12:17:38.317000 audit: BPF prog-id=153 op=LOAD Dec 16 12:17:38.319000 audit: BPF prog-id=154 op=LOAD Dec 16 12:17:38.319000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.319000 audit: BPF prog-id=154 op=UNLOAD Dec 16 12:17:38.319000 audit[3800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.319000 audit: BPF prog-id=155 op=LOAD Dec 16 12:17:38.319000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.319000 audit: BPF prog-id=156 op=LOAD Dec 16 12:17:38.319000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.319000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:17:38.319000 audit[3800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.319000 audit: BPF prog-id=155 op=UNLOAD Dec 16 12:17:38.319000 audit[3800]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.319000 audit: BPF prog-id=157 op=LOAD Dec 16 12:17:38.319000 audit[3800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3595 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:38.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237346431323566663762393564663562663466613334656638393264 Dec 16 12:17:38.370931 containerd[1971]: time="2025-12-16T12:17:38.370756928Z" level=info msg="StartContainer for \"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\" returns successfully" Dec 16 12:17:40.001093 kubelet[3411]: I1216 12:17:39.999932 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sjp2r" podStartSLOduration=2.717270129 podStartE2EDuration="4.999907836s" podCreationTimestamp="2025-12-16 12:17:35 +0000 UTC" firstStartedPulling="2025-12-16 12:17:35.916110308 +0000 UTC m=+5.091080750" lastFinishedPulling="2025-12-16 12:17:38.198748015 +0000 UTC m=+7.373718457" observedRunningTime="2025-12-16 12:17:38.705950602 +0000 UTC m=+7.880921068" watchObservedRunningTime="2025-12-16 12:17:39.999907836 +0000 UTC m=+9.174878266" Dec 16 12:17:47.294023 sudo[2343]: pam_unix(sudo:session): session closed for user root Dec 16 12:17:47.293000 audit[2343]: USER_END pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:17:47.298224 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:17:47.298345 kernel: audit: type=1106 audit(1765887467.293:528): pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:17:47.293000 audit[2343]: CRED_DISP pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:17:47.306658 kernel: audit: type=1104 audit(1765887467.293:529): pid=2343 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:17:47.322799 sshd[2342]: Connection closed by 147.75.109.163 port 47358 Dec 16 12:17:47.324937 sshd-session[2338]: pam_unix(sshd:session): session closed for user core Dec 16 12:17:47.330000 audit[2338]: USER_END pid=2338 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:17:47.342765 systemd[1]: sshd@6-172.31.20.6:22-147.75.109.163:47358.service: Deactivated successfully. Dec 16 12:17:47.330000 audit[2338]: CRED_DISP pid=2338 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:17:47.352302 kernel: audit: type=1106 audit(1765887467.330:530): pid=2338 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:17:47.352393 kernel: audit: type=1104 audit(1765887467.330:531): pid=2338 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:17:47.353822 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:17:47.356158 systemd[1]: session-8.scope: Consumed 11.870s CPU time, 222.8M memory peak. Dec 16 12:17:47.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.20.6:22-147.75.109.163:47358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:47.362140 kernel: audit: type=1131 audit(1765887467.342:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.20.6:22-147.75.109.163:47358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:17:47.365946 systemd-logind[1939]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:17:47.370075 systemd-logind[1939]: Removed session 8. 
Dec 16 12:17:49.469000 audit[3882]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:49.483085 kernel: audit: type=1325 audit(1765887469.469:533): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:49.483244 kernel: audit: type=1300 audit(1765887469.469:533): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb5e8460 a2=0 a3=1 items=0 ppid=3639 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:49.469000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb5e8460 a2=0 a3=1 items=0 ppid=3639 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:49.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:49.489937 kernel: audit: type=1327 audit(1765887469.469:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:49.478000 audit[3882]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:49.495741 kernel: audit: type=1325 audit(1765887469.478:534): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:49.478000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb5e8460 a2=0 a3=1 items=0 ppid=3639 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:49.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:49.503110 kernel: audit: type=1300 audit(1765887469.478:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb5e8460 a2=0 a3=1 items=0 ppid=3639 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:49.518000 audit[3884]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:49.518000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffda7faf0 a2=0 a3=1 items=0 ppid=3639 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:49.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:49.522000 audit[3884]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3884 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:17:49.522000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffda7faf0 a2=0 a3=1 items=0 ppid=3639 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:49.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.135000 audit[3886]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.142113 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:17:57.142227 kernel: audit: type=1325 audit(1765887477.135:537): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.135000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffea7e8c20 a2=0 a3=1 items=0 ppid=3639 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.150503 kernel: audit: type=1300 audit(1765887477.135:537): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffea7e8c20 a2=0 a3=1 items=0 ppid=3639 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.155513 kernel: audit: type=1327 audit(1765887477.135:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.151000 audit[3886]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.151000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea7e8c20 a2=0 a3=1 items=0 ppid=3639 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.169446 kernel: audit: type=1325 audit(1765887477.151:538): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.169870 kernel: audit: type=1300 audit(1765887477.151:538): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea7e8c20 a2=0 a3=1 items=0 ppid=3639 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.174461 kernel: audit: type=1327 audit(1765887477.151:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.188000 audit[3888]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.188000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc5db0f10 a2=0 a3=1 items=0 ppid=3639 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.200512 kernel: audit: type=1325 audit(1765887477.188:539): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.200755 kernel: audit: type=1300 audit(1765887477.188:539): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc5db0f10 a2=0 a3=1 items=0 ppid=3639 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.206303 kernel: audit: type=1327 audit(1765887477.188:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:57.206000 audit[3888]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.210934 kernel: audit: type=1325 audit(1765887477.206:540): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:57.206000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5db0f10 a2=0 a3=1 items=0 ppid=3639 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:57.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:58.935000 audit[3890]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:58.935000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd212f5b0 a2=0 a3=1 items=0 ppid=3639 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:58.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:58.946000 audit[3890]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:58.946000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd212f5b0 a2=0 a3=1 items=0 ppid=3639 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:58.946000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:59.985000 audit[3892]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:59.985000 audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd453e230 a2=0 a3=1 items=0 ppid=3639 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:59.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:17:59.988000 audit[3892]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:17:59.988000 audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd453e230 a2=0 a3=1 items=0 ppid=3639 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:17:59.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:01.066000 audit[3894]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:01.066000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcca132c0 a2=0 a3=1 items=0 ppid=3639 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:01.084000 audit[3894]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:01.084000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcca132c0 a2=0 a3=1 items=0 ppid=3639 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:01.131353 systemd[1]: Created slice kubepods-besteffort-pod033a0fef_ca99_4f5d_a42f_51c1c6254d6e.slice - libcontainer container kubepods-besteffort-pod033a0fef_ca99_4f5d_a42f_51c1c6254d6e.slice. 
Dec 16 12:18:01.181569 kubelet[3411]: I1216 12:18:01.181200 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/033a0fef-ca99-4f5d-a42f-51c1c6254d6e-typha-certs\") pod \"calico-typha-75cfcf997-cnl22\" (UID: \"033a0fef-ca99-4f5d-a42f-51c1c6254d6e\") " pod="calico-system/calico-typha-75cfcf997-cnl22" Dec 16 12:18:01.181569 kubelet[3411]: I1216 12:18:01.181370 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033a0fef-ca99-4f5d-a42f-51c1c6254d6e-tigera-ca-bundle\") pod \"calico-typha-75cfcf997-cnl22\" (UID: \"033a0fef-ca99-4f5d-a42f-51c1c6254d6e\") " pod="calico-system/calico-typha-75cfcf997-cnl22" Dec 16 12:18:01.181569 kubelet[3411]: I1216 12:18:01.181415 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n55v2\" (UniqueName: \"kubernetes.io/projected/033a0fef-ca99-4f5d-a42f-51c1c6254d6e-kube-api-access-n55v2\") pod \"calico-typha-75cfcf997-cnl22\" (UID: \"033a0fef-ca99-4f5d-a42f-51c1c6254d6e\") " pod="calico-system/calico-typha-75cfcf997-cnl22" Dec 16 12:18:01.207000 audit[3896]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:01.207000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff8c7e210 a2=0 a3=1 items=0 ppid=3639 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:01.212000 audit[3896]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:01.212000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff8c7e210 a2=0 a3=1 items=0 ppid=3639 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:01.359528 systemd[1]: Created slice kubepods-besteffort-podd321df29_2f84_48b2_86a9_cc0acc5de5f8.slice - libcontainer container kubepods-besteffort-podd321df29_2f84_48b2_86a9_cc0acc5de5f8.slice. 
Dec 16 12:18:01.383364 kubelet[3411]: I1216 12:18:01.383291 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-lib-modules\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.383364 kubelet[3411]: I1216 12:18:01.383372 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-cni-log-dir\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385304 kubelet[3411]: I1216 12:18:01.383415 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d321df29-2f84-48b2-86a9-cc0acc5de5f8-tigera-ca-bundle\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385304 kubelet[3411]: I1216 12:18:01.383473 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-var-lib-calico\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385304 kubelet[3411]: I1216 12:18:01.383538 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-policysync\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385304 kubelet[3411]: I1216 12:18:01.383599 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-flexvol-driver-host\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385304 kubelet[3411]: I1216 12:18:01.383635 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d321df29-2f84-48b2-86a9-cc0acc5de5f8-node-certs\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385581 kubelet[3411]: I1216 12:18:01.383674 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-cni-bin-dir\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385581 kubelet[3411]: I1216 12:18:01.383709 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-xtables-lock\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385581 kubelet[3411]: I1216 12:18:01.383746 3411 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-cni-net-dir\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385581 kubelet[3411]: I1216 12:18:01.383783 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d321df29-2f84-48b2-86a9-cc0acc5de5f8-var-run-calico\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.385581 kubelet[3411]: I1216 12:18:01.383818 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwkn\" (UniqueName: \"kubernetes.io/projected/d321df29-2f84-48b2-86a9-cc0acc5de5f8-kube-api-access-mvwkn\") pod \"calico-node-f2pkl\" (UID: \"d321df29-2f84-48b2-86a9-cc0acc5de5f8\") " pod="calico-system/calico-node-f2pkl" Dec 16 12:18:01.437705 kubelet[3411]: E1216 12:18:01.437410 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:01.450243 containerd[1971]: time="2025-12-16T12:18:01.450156163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cfcf997-cnl22,Uid:033a0fef-ca99-4f5d-a42f-51c1c6254d6e,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:01.514747 kubelet[3411]: E1216 12:18:01.514531 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.515825 kubelet[3411]: W1216 12:18:01.515646 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.515825 kubelet[3411]: E1216 12:18:01.515768 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.537273 kubelet[3411]: E1216 12:18:01.533645 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.538134 kubelet[3411]: W1216 12:18:01.537600 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.538356 kubelet[3411]: E1216 12:18:01.538323 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.538856 kubelet[3411]: E1216 12:18:01.538827 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.541035 kubelet[3411]: W1216 12:18:01.540991 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.542023 kubelet[3411]: E1216 12:18:01.541854 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.550992 kubelet[3411]: E1216 12:18:01.550949 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.554686 kubelet[3411]: W1216 12:18:01.554168 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.554686 kubelet[3411]: E1216 12:18:01.554274 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.555258 containerd[1971]: time="2025-12-16T12:18:01.553575463Z" level=info msg="connecting to shim 36b2f497159b7acf911b010b02962449d8e78f5ebe0fa61ce234a158e9c40405" address="unix:///run/containerd/s/97c34d2918b54488c214ab4b7778907769cf0aadec79b9085491af247cd32e42" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:01.556397 kubelet[3411]: E1216 12:18:01.556005 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.562291 kubelet[3411]: W1216 12:18:01.556039 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.562291 kubelet[3411]: E1216 12:18:01.562161 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.566519 kubelet[3411]: E1216 12:18:01.566144 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.566519 kubelet[3411]: W1216 12:18:01.566183 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.566519 kubelet[3411]: E1216 12:18:01.566216 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.572868 kubelet[3411]: E1216 12:18:01.569725 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.572868 kubelet[3411]: W1216 12:18:01.571894 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.572868 kubelet[3411]: E1216 12:18:01.572129 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.576306 kubelet[3411]: E1216 12:18:01.573809 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.576306 kubelet[3411]: W1216 12:18:01.573847 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.576306 kubelet[3411]: E1216 12:18:01.573881 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.576306 kubelet[3411]: E1216 12:18:01.575525 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.576306 kubelet[3411]: W1216 12:18:01.575554 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.576306 kubelet[3411]: E1216 12:18:01.575609 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.577812 kubelet[3411]: E1216 12:18:01.577420 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.577812 kubelet[3411]: W1216 12:18:01.577458 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.577812 kubelet[3411]: E1216 12:18:01.577491 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.579389 kubelet[3411]: E1216 12:18:01.579353 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.580112 kubelet[3411]: W1216 12:18:01.579823 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.582171 kubelet[3411]: E1216 12:18:01.580746 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.587396 kubelet[3411]: E1216 12:18:01.587241 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.591045 kubelet[3411]: W1216 12:18:01.590992 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.591301 kubelet[3411]: E1216 12:18:01.591275 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.592475 kubelet[3411]: E1216 12:18:01.592160 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.592475 kubelet[3411]: W1216 12:18:01.592194 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.592475 kubelet[3411]: E1216 12:18:01.592224 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.593934 kubelet[3411]: E1216 12:18:01.593900 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.594397 kubelet[3411]: W1216 12:18:01.594363 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.594559 kubelet[3411]: E1216 12:18:01.594534 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.596743 kubelet[3411]: E1216 12:18:01.596705 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.597160 kubelet[3411]: W1216 12:18:01.596895 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.597160 kubelet[3411]: E1216 12:18:01.596959 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.599150 kubelet[3411]: E1216 12:18:01.598370 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.600028 kubelet[3411]: W1216 12:18:01.598410 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.600548 kubelet[3411]: E1216 12:18:01.600101 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.600548 kubelet[3411]: I1216 12:18:01.600185 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f31f51a-3fe7-4796-97ca-d9a3c9b5116f-registration-dir\") pod \"csi-node-driver-7wp4r\" (UID: \"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f\") " pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:01.602191 kubelet[3411]: E1216 12:18:01.601812 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.602191 kubelet[3411]: W1216 12:18:01.601972 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.602743 kubelet[3411]: E1216 12:18:01.602607 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.603871 kubelet[3411]: E1216 12:18:01.602608 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.603871 kubelet[3411]: W1216 12:18:01.603139 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.603871 kubelet[3411]: E1216 12:18:01.603182 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.603871 kubelet[3411]: I1216 12:18:01.603227 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f31f51a-3fe7-4796-97ca-d9a3c9b5116f-kubelet-dir\") pod \"csi-node-driver-7wp4r\" (UID: \"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f\") " pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:01.605343 kubelet[3411]: E1216 12:18:01.605303 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.605973 kubelet[3411]: W1216 12:18:01.605802 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.606581 kubelet[3411]: E1216 12:18:01.606137 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.608031 kubelet[3411]: I1216 12:18:01.607202 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f31f51a-3fe7-4796-97ca-d9a3c9b5116f-socket-dir\") pod \"csi-node-driver-7wp4r\" (UID: \"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f\") " pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:01.608513 kubelet[3411]: E1216 12:18:01.608470 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.608513 kubelet[3411]: W1216 12:18:01.608509 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.608723 kubelet[3411]: E1216 12:18:01.608557 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.610810 kubelet[3411]: E1216 12:18:01.610392 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.610810 kubelet[3411]: W1216 12:18:01.610431 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.610810 kubelet[3411]: E1216 12:18:01.610581 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.613299 kubelet[3411]: E1216 12:18:01.613033 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.613299 kubelet[3411]: W1216 12:18:01.613099 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.613995 kubelet[3411]: E1216 12:18:01.613731 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.616708 kubelet[3411]: E1216 12:18:01.614642 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.616708 kubelet[3411]: W1216 12:18:01.614678 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.616708 kubelet[3411]: E1216 12:18:01.614841 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.621575 kubelet[3411]: E1216 12:18:01.618727 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.621771 kubelet[3411]: W1216 12:18:01.621740 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.622171 kubelet[3411]: E1216 12:18:01.622030 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.623204 kubelet[3411]: E1216 12:18:01.623168 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.624494 kubelet[3411]: W1216 12:18:01.624436 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.624952 kubelet[3411]: E1216 12:18:01.624737 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.625343 kubelet[3411]: E1216 12:18:01.625315 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.626240 kubelet[3411]: W1216 12:18:01.626175 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.626428 kubelet[3411]: E1216 12:18:01.626403 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.628451 kubelet[3411]: E1216 12:18:01.628124 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.628451 kubelet[3411]: W1216 12:18:01.628382 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.628668 kubelet[3411]: E1216 12:18:01.628562 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.630291 kubelet[3411]: E1216 12:18:01.630235 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.630291 kubelet[3411]: W1216 12:18:01.630276 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.631199 kubelet[3411]: E1216 12:18:01.631153 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.631487 kubelet[3411]: E1216 12:18:01.631448 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.631588 kubelet[3411]: W1216 12:18:01.631507 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.631588 kubelet[3411]: E1216 12:18:01.631551 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.633213 kubelet[3411]: E1216 12:18:01.633167 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.633213 kubelet[3411]: W1216 12:18:01.633203 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.636094 kubelet[3411]: E1216 12:18:01.634753 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.636094 kubelet[3411]: E1216 12:18:01.634964 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.636094 kubelet[3411]: W1216 12:18:01.635008 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.636094 kubelet[3411]: E1216 12:18:01.635039 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.636733 kubelet[3411]: E1216 12:18:01.636457 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.636733 kubelet[3411]: W1216 12:18:01.636493 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.636733 kubelet[3411]: E1216 12:18:01.636674 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.652492 systemd[1]: Started cri-containerd-36b2f497159b7acf911b010b02962449d8e78f5ebe0fa61ce234a158e9c40405.scope - libcontainer container 36b2f497159b7acf911b010b02962449d8e78f5ebe0fa61ce234a158e9c40405. 
Dec 16 12:18:01.673009 containerd[1971]: time="2025-12-16T12:18:01.672772832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f2pkl,Uid:d321df29-2f84-48b2-86a9-cc0acc5de5f8,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:01.708000 audit: BPF prog-id=158 op=LOAD Dec 16 12:18:01.711006 kubelet[3411]: E1216 12:18:01.710972 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.711695 kubelet[3411]: W1216 12:18:01.711095 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.711695 kubelet[3411]: E1216 12:18:01.711132 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.711695 kubelet[3411]: I1216 12:18:01.711188 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f31f51a-3fe7-4796-97ca-d9a3c9b5116f-varrun\") pod \"csi-node-driver-7wp4r\" (UID: \"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f\") " pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:01.712590 kubelet[3411]: E1216 12:18:01.712513 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.712829 kubelet[3411]: W1216 12:18:01.712673 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.713009 kubelet[3411]: E1216 12:18:01.712921 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.713240 kubelet[3411]: E1216 12:18:01.713170 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.713240 kubelet[3411]: W1216 12:18:01.713189 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.713240 kubelet[3411]: E1216 12:18:01.713220 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.714032 kubelet[3411]: E1216 12:18:01.713991 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.714250 kubelet[3411]: W1216 12:18:01.714048 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.714446 kubelet[3411]: E1216 12:18:01.714231 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.713000 audit: BPF prog-id=159 op=LOAD Dec 16 12:18:01.713000 audit[3931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.713000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:18:01.713000 audit[3931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.714000 audit: BPF prog-id=160 op=LOAD Dec 16 12:18:01.714000 audit[3931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.714000 audit: BPF prog-id=161 op=LOAD Dec 16 12:18:01.714000 audit[3931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.714000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:18:01.714000 audit[3931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.714000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:18:01.714000 audit[3931]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.715000 audit: BPF prog-id=162 op=LOAD Dec 16 12:18:01.715000 audit[3931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3911 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336623266343937313539623761636639313162303130623032393632 Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.715175 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.720766 kubelet[3411]: W1216 12:18:01.715471 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.715522 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.716291 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.720766 kubelet[3411]: W1216 12:18:01.716315 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.716356 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.717597 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.720766 kubelet[3411]: W1216 12:18:01.717623 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.717701 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.720766 kubelet[3411]: E1216 12:18:01.719519 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.721970 kubelet[3411]: W1216 12:18:01.719667 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.721970 kubelet[3411]: E1216 12:18:01.719877 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.722577 kubelet[3411]: E1216 12:18:01.722531 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.722577 kubelet[3411]: W1216 12:18:01.722569 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.722976 kubelet[3411]: E1216 12:18:01.722666 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.722976 kubelet[3411]: I1216 12:18:01.722731 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lph\" (UniqueName: \"kubernetes.io/projected/7f31f51a-3fe7-4796-97ca-d9a3c9b5116f-kube-api-access-84lph\") pod \"csi-node-driver-7wp4r\" (UID: \"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f\") " pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:01.725610 kubelet[3411]: E1216 12:18:01.725408 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.725610 kubelet[3411]: W1216 12:18:01.725450 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.725610 kubelet[3411]: E1216 12:18:01.725555 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.726641 kubelet[3411]: E1216 12:18:01.726601 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.726805 kubelet[3411]: W1216 12:18:01.726764 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.726805 kubelet[3411]: E1216 12:18:01.726976 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.728456 kubelet[3411]: E1216 12:18:01.728401 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.728802 kubelet[3411]: W1216 12:18:01.728443 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.732067 kubelet[3411]: E1216 12:18:01.731100 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.732067 kubelet[3411]: W1216 12:18:01.731170 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.732721 kubelet[3411]: E1216 12:18:01.732673 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.732721 kubelet[3411]: W1216 12:18:01.732711 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.733008 kubelet[3411]: E1216 12:18:01.732966 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.736034 kubelet[3411]: E1216 12:18:01.735709 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.736034 kubelet[3411]: E1216 12:18:01.735816 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.736366 kubelet[3411]: E1216 12:18:01.736314 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.736366 kubelet[3411]: W1216 12:18:01.736354 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.736970 kubelet[3411]: E1216 12:18:01.736399 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.738336 kubelet[3411]: E1216 12:18:01.738303 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.738707 kubelet[3411]: W1216 12:18:01.738516 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.738707 kubelet[3411]: E1216 12:18:01.738570 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.739560 kubelet[3411]: E1216 12:18:01.739517 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.739688 kubelet[3411]: W1216 12:18:01.739553 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.739688 kubelet[3411]: E1216 12:18:01.739665 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.740520 kubelet[3411]: E1216 12:18:01.740475 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.740520 kubelet[3411]: W1216 12:18:01.740511 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.742082 kubelet[3411]: E1216 12:18:01.740748 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.742804 kubelet[3411]: E1216 12:18:01.742760 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.742962 kubelet[3411]: W1216 12:18:01.742922 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.743033 kubelet[3411]: E1216 12:18:01.742991 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.743813 kubelet[3411]: E1216 12:18:01.743747 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.743916 kubelet[3411]: W1216 12:18:01.743892 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.743989 kubelet[3411]: E1216 12:18:01.743925 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.744540 kubelet[3411]: E1216 12:18:01.744496 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.744656 kubelet[3411]: W1216 12:18:01.744526 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.744656 kubelet[3411]: E1216 12:18:01.744634 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.748845 containerd[1971]: time="2025-12-16T12:18:01.748773824Z" level=info msg="connecting to shim 65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb" address="unix:///run/containerd/s/1ceff3d1b2edceda2b3ff5e94493c1baf5b3eedf6e7aca46436cb31e9b2d069a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:01.807708 systemd[1]: Started cri-containerd-65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb.scope - libcontainer container 65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb. Dec 16 12:18:01.814336 containerd[1971]: time="2025-12-16T12:18:01.814287489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cfcf997-cnl22,Uid:033a0fef-ca99-4f5d-a42f-51c1c6254d6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"36b2f497159b7acf911b010b02962449d8e78f5ebe0fa61ce234a158e9c40405\"" Dec 16 12:18:01.818876 containerd[1971]: time="2025-12-16T12:18:01.818815725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:18:01.837244 kubelet[3411]: E1216 12:18:01.837193 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.837244 kubelet[3411]: W1216 12:18:01.837236 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.837443 kubelet[3411]: E1216 12:18:01.837272 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.838732 kubelet[3411]: E1216 12:18:01.838277 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.838732 kubelet[3411]: W1216 12:18:01.838315 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.838732 kubelet[3411]: E1216 12:18:01.838387 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.838975 kubelet[3411]: E1216 12:18:01.838817 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.838975 kubelet[3411]: W1216 12:18:01.838838 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.838975 kubelet[3411]: E1216 12:18:01.838888 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.839957 kubelet[3411]: E1216 12:18:01.839275 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.839957 kubelet[3411]: W1216 12:18:01.839294 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.839957 kubelet[3411]: E1216 12:18:01.839406 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.839957 kubelet[3411]: E1216 12:18:01.840016 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.840498 kubelet[3411]: W1216 12:18:01.840046 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.840498 kubelet[3411]: E1216 12:18:01.840127 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.840826 kubelet[3411]: E1216 12:18:01.840767 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.840826 kubelet[3411]: W1216 12:18:01.840799 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.841181 kubelet[3411]: E1216 12:18:01.840914 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.841445 kubelet[3411]: E1216 12:18:01.841404 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.841445 kubelet[3411]: W1216 12:18:01.841436 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.841550 kubelet[3411]: E1216 12:18:01.841472 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.842267 kubelet[3411]: E1216 12:18:01.841952 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.842267 kubelet[3411]: W1216 12:18:01.841983 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.842267 kubelet[3411]: E1216 12:18:01.842012 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.842986 kubelet[3411]: E1216 12:18:01.842517 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.842986 kubelet[3411]: W1216 12:18:01.842554 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.842986 kubelet[3411]: E1216 12:18:01.842632 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.843276 kubelet[3411]: E1216 12:18:01.843094 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.843276 kubelet[3411]: W1216 12:18:01.843117 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.843276 kubelet[3411]: E1216 12:18:01.843144 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:01.847000 audit: BPF prog-id=163 op=LOAD Dec 16 12:18:01.849000 audit: BPF prog-id=164 op=LOAD Dec 16 12:18:01.849000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.850000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:18:01.850000 audit[4013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.851000 audit: BPF prog-id=165 op=LOAD Dec 16 12:18:01.851000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.854000 audit: BPF prog-id=166 op=LOAD Dec 16 
12:18:01.854000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.855000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:18:01.855000 audit[4013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.856000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:18:01.856000 audit[4013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.858000 audit: BPF prog-id=167 op=LOAD Dec 16 12:18:01.858000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:01.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653535343565306639353833626162363538623736373465343337 Dec 16 12:18:01.867536 kubelet[3411]: E1216 12:18:01.867480 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:01.867536 kubelet[3411]: W1216 12:18:01.867524 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:01.867710 kubelet[3411]: E1216 12:18:01.867561 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:01.907086 containerd[1971]: time="2025-12-16T12:18:01.905851785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f2pkl,Uid:d321df29-2f84-48b2-86a9-cc0acc5de5f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\"" Dec 16 12:18:03.132335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662305153.mount: Deactivated successfully. Dec 16 12:18:03.545614 kubelet[3411]: E1216 12:18:03.545326 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:03.892632 containerd[1971]: time="2025-12-16T12:18:03.892345343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:03.895922 containerd[1971]: time="2025-12-16T12:18:03.895838987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33086690" Dec 16 12:18:03.898685 containerd[1971]: time="2025-12-16T12:18:03.898588667Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:03.902887 containerd[1971]: time="2025-12-16T12:18:03.902841143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:03.904783 containerd[1971]: time="2025-12-16T12:18:03.903993743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.085113734s" Dec 16 12:18:03.904783 containerd[1971]: time="2025-12-16T12:18:03.904068395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:18:03.907443 containerd[1971]: time="2025-12-16T12:18:03.907145087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:18:03.945414 containerd[1971]: time="2025-12-16T12:18:03.945335339Z" level=info msg="CreateContainer within sandbox \"36b2f497159b7acf911b010b02962449d8e78f5ebe0fa61ce234a158e9c40405\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:18:03.963637 containerd[1971]: time="2025-12-16T12:18:03.963571823Z" level=info msg="Container 24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:03.971212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850625616.mount: Deactivated successfully. 
Dec 16 12:18:03.990291 containerd[1971]: time="2025-12-16T12:18:03.990216911Z" level=info msg="CreateContainer within sandbox \"36b2f497159b7acf911b010b02962449d8e78f5ebe0fa61ce234a158e9c40405\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490\"" Dec 16 12:18:03.991599 containerd[1971]: time="2025-12-16T12:18:03.991538699Z" level=info msg="StartContainer for \"24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490\"" Dec 16 12:18:03.994028 containerd[1971]: time="2025-12-16T12:18:03.993863579Z" level=info msg="connecting to shim 24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490" address="unix:///run/containerd/s/97c34d2918b54488c214ab4b7778907769cf0aadec79b9085491af247cd32e42" protocol=ttrpc version=3 Dec 16 12:18:04.037414 systemd[1]: Started cri-containerd-24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490.scope - libcontainer container 24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490. Dec 16 12:18:04.069000 audit: BPF prog-id=168 op=LOAD Dec 16 12:18:04.072077 kernel: kauditd_printk_skb: 70 callbacks suppressed Dec 16 12:18:04.072139 kernel: audit: type=1334 audit(1765887484.069:565): prog-id=168 op=LOAD Dec 16 12:18:04.073000 audit: BPF prog-id=169 op=LOAD Dec 16 12:18:04.073000 audit[4066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.082725 kernel: audit: type=1334 audit(1765887484.073:566): prog-id=169 op=LOAD Dec 16 12:18:04.082840 kernel: audit: type=1300 audit(1765887484.073:566): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.082889 kernel: audit: type=1327 audit(1765887484.073:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.073000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:18:04.089751 kernel: audit: type=1334 audit(1765887484.073:567): prog-id=169 op=UNLOAD Dec 16 12:18:04.089847 kernel: audit: type=1300 audit(1765887484.073:567): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.073000 audit[4066]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
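The kubelet entries throughout this log (for example "E1216 12:18:01.587241 3411 driver-call.go:262] ...") carry the standard klog header: a severity letter, MMDD, wall-clock time with microseconds, the process id, and the source file:line of the call site. A small Go sketch (a hypothetical triage helper, not part of any tool shown in this log) splits that header out:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches lines like "E1216 12:18:01.587241 3411 driver-call.go:262] msg".
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `E1216 12:18:01.587241 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input`

	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
		fmt.Println("msg:", m[7])
	}
}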
Dec 16 12:18:04.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.101341 kernel: audit: type=1327 audit(1765887484.073:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.073000 audit: BPF prog-id=170 op=LOAD Dec 16 12:18:04.103196 kernel: audit: type=1334 audit(1765887484.073:568): prog-id=170 op=LOAD Dec 16 12:18:04.073000 audit[4066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.111847 kernel: audit: type=1300 audit(1765887484.073:568): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.111966 kernel: audit: type=1327 audit(1765887484.073:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.075000 audit: BPF prog-id=171 op=LOAD Dec 16 12:18:04.075000 audit[4066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.089000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:18:04.089000 audit[4066]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.089000 audit: BPF prog-id=170 op=UNLOAD Dec 16 
12:18:04.089000 audit[4066]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.089000 audit: BPF prog-id=172 op=LOAD Dec 16 12:18:04.089000 audit[4066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3911 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643535326239353362396137396430653362356663353832623961 Dec 16 12:18:04.199714 containerd[1971]: time="2025-12-16T12:18:04.199542356Z" level=info msg="StartContainer for \"24d552b953b9a79d0e3b5fc582b9abedc644e6a2ba14b5fa08713e22d5722490\" returns successfully" Dec 16 12:18:04.858381 kubelet[3411]: E1216 12:18:04.858169 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.860486 kubelet[3411]: W1216 12:18:04.859484 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.860486 kubelet[3411]: E1216 12:18:04.859630 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.861590 kubelet[3411]: E1216 12:18:04.861505 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.862527 kubelet[3411]: W1216 12:18:04.862388 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.862527 kubelet[3411]: E1216 12:18:04.862493 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.863087 kubelet[3411]: E1216 12:18:04.862980 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.863087 kubelet[3411]: W1216 12:18:04.863013 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.863087 kubelet[3411]: E1216 12:18:04.863039 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.863952 kubelet[3411]: E1216 12:18:04.863868 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.863952 kubelet[3411]: W1216 12:18:04.863908 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.863952 kubelet[3411]: E1216 12:18:04.863938 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.866276 kubelet[3411]: E1216 12:18:04.866222 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.866276 kubelet[3411]: W1216 12:18:04.866264 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.866276 kubelet[3411]: E1216 12:18:04.866299 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.867001 kubelet[3411]: E1216 12:18:04.866786 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.867001 kubelet[3411]: W1216 12:18:04.866835 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.867001 kubelet[3411]: E1216 12:18:04.866861 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.869134 kubelet[3411]: E1216 12:18:04.868097 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.869480 kubelet[3411]: W1216 12:18:04.869141 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.869480 kubelet[3411]: E1216 12:18:04.869222 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.870107 kubelet[3411]: E1216 12:18:04.869763 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.870107 kubelet[3411]: W1216 12:18:04.869781 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.870107 kubelet[3411]: E1216 12:18:04.869836 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.870928 kubelet[3411]: E1216 12:18:04.870885 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.870928 kubelet[3411]: W1216 12:18:04.870923 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.871143 kubelet[3411]: E1216 12:18:04.870956 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.874720 kubelet[3411]: E1216 12:18:04.874554 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.874720 kubelet[3411]: W1216 12:18:04.874589 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.874720 kubelet[3411]: E1216 12:18:04.874622 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.875492 kubelet[3411]: E1216 12:18:04.875335 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.875492 kubelet[3411]: W1216 12:18:04.875360 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.875492 kubelet[3411]: E1216 12:18:04.875386 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.876077 kubelet[3411]: E1216 12:18:04.876031 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.876210 kubelet[3411]: W1216 12:18:04.876186 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.876321 kubelet[3411]: E1216 12:18:04.876297 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.877242 kubelet[3411]: E1216 12:18:04.876994 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.877242 kubelet[3411]: W1216 12:18:04.877021 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.877242 kubelet[3411]: E1216 12:18:04.877048 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.878688 kubelet[3411]: E1216 12:18:04.878523 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.878688 kubelet[3411]: W1216 12:18:04.878558 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.878688 kubelet[3411]: E1216 12:18:04.878589 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.881890 kubelet[3411]: E1216 12:18:04.880168 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.881890 kubelet[3411]: W1216 12:18:04.880205 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.881890 kubelet[3411]: E1216 12:18:04.880238 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.882707 kubelet[3411]: E1216 12:18:04.882673 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.882925 kubelet[3411]: W1216 12:18:04.882895 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.883159 kubelet[3411]: E1216 12:18:04.883132 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.886120 kubelet[3411]: E1216 12:18:04.886049 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.886395 kubelet[3411]: W1216 12:18:04.886272 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.886395 kubelet[3411]: E1216 12:18:04.886338 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.887910 kubelet[3411]: E1216 12:18:04.887666 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.887910 kubelet[3411]: W1216 12:18:04.887700 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.887910 kubelet[3411]: E1216 12:18:04.887751 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.890089 kubelet[3411]: E1216 12:18:04.889952 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.890089 kubelet[3411]: W1216 12:18:04.889993 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.890089 kubelet[3411]: E1216 12:18:04.890116 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.891079 kubelet[3411]: E1216 12:18:04.890946 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.891079 kubelet[3411]: W1216 12:18:04.890980 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.891232 kubelet[3411]: E1216 12:18:04.891194 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.895154 kubelet[3411]: I1216 12:18:04.893624 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75cfcf997-cnl22" podStartSLOduration=1.805532866 podStartE2EDuration="3.893600724s" podCreationTimestamp="2025-12-16 12:18:01 +0000 UTC" firstStartedPulling="2025-12-16 12:18:01.817727145 +0000 UTC m=+30.992697587" lastFinishedPulling="2025-12-16 12:18:03.905795003 +0000 UTC m=+33.080765445" observedRunningTime="2025-12-16 12:18:04.846115608 +0000 UTC m=+34.021086146" watchObservedRunningTime="2025-12-16 12:18:04.893600724 +0000 UTC m=+34.068571154" Dec 16 12:18:04.898164 kubelet[3411]: E1216 12:18:04.898111 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.899761 kubelet[3411]: W1216 12:18:04.899714 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.900702 kubelet[3411]: E1216 12:18:04.900011 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.901539 kubelet[3411]: E1216 12:18:04.901331 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.901748 kubelet[3411]: W1216 12:18:04.901705 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.904105 kubelet[3411]: E1216 12:18:04.901932 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.904751 kubelet[3411]: E1216 12:18:04.904717 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.904899 kubelet[3411]: W1216 12:18:04.904873 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.905156 kubelet[3411]: E1216 12:18:04.905114 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.905575 kubelet[3411]: E1216 12:18:04.905547 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.905716 kubelet[3411]: W1216 12:18:04.905690 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.905885 kubelet[3411]: E1216 12:18:04.905844 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.907306 kubelet[3411]: E1216 12:18:04.907267 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.907877 kubelet[3411]: W1216 12:18:04.907594 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.907877 kubelet[3411]: E1216 12:18:04.907685 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.908263 kubelet[3411]: E1216 12:18:04.908236 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.908392 kubelet[3411]: W1216 12:18:04.908367 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.908570 kubelet[3411]: E1216 12:18:04.908528 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.910496 kubelet[3411]: E1216 12:18:04.910446 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.911095 kubelet[3411]: W1216 12:18:04.910667 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.911095 kubelet[3411]: E1216 12:18:04.910827 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.913273 kubelet[3411]: E1216 12:18:04.913215 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.914567 kubelet[3411]: W1216 12:18:04.913377 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.914567 kubelet[3411]: E1216 12:18:04.913455 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.914567 kubelet[3411]: E1216 12:18:04.914328 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.914567 kubelet[3411]: W1216 12:18:04.914353 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.914567 kubelet[3411]: E1216 12:18:04.914512 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.916477 kubelet[3411]: E1216 12:18:04.916395 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.916477 kubelet[3411]: W1216 12:18:04.916430 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.917399 kubelet[3411]: E1216 12:18:04.916691 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.918799 kubelet[3411]: E1216 12:18:04.918752 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.919250 kubelet[3411]: W1216 12:18:04.919217 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.919827 kubelet[3411]: E1216 12:18:04.919555 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:04.921011 kubelet[3411]: E1216 12:18:04.920956 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.921337 kubelet[3411]: W1216 12:18:04.921135 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.921337 kubelet[3411]: E1216 12:18:04.921178 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:18:04.921801 kubelet[3411]: E1216 12:18:04.921771 3411 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:18:04.922048 kubelet[3411]: W1216 12:18:04.921918 3411 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:18:04.922048 kubelet[3411]: E1216 12:18:04.921957 3411 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:18:05.005000 audit[4141]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:05.005000 audit[4141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcb4b9640 a2=0 a3=1 items=0 ppid=3639 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:05.010000 audit[4141]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:05.010000 audit[4141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcb4b9640 a2=0 a3=1 items=0 ppid=3639 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:05.257904 containerd[1971]: time="2025-12-16T12:18:05.257659066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:05.262408 containerd[1971]: time="2025-12-16T12:18:05.261971170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:05.264313 containerd[1971]: time="2025-12-16T12:18:05.263686042Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:05.268146 containerd[1971]: time="2025-12-16T12:18:05.268047190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:05.269685 containerd[1971]: time="2025-12-16T12:18:05.269636002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.361738359s" Dec 16 
12:18:05.269947 containerd[1971]: time="2025-12-16T12:18:05.269816146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:18:05.275400 containerd[1971]: time="2025-12-16T12:18:05.275327014Z" level=info msg="CreateContainer within sandbox \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:18:05.295126 containerd[1971]: time="2025-12-16T12:18:05.295016518Z" level=info msg="Container 85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:05.306572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932630124.mount: Deactivated successfully. Dec 16 12:18:05.321955 containerd[1971]: time="2025-12-16T12:18:05.321718822Z" level=info msg="CreateContainer within sandbox \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce\"" Dec 16 12:18:05.324974 containerd[1971]: time="2025-12-16T12:18:05.323348242Z" level=info msg="StartContainer for \"85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce\"" Dec 16 12:18:05.330680 containerd[1971]: time="2025-12-16T12:18:05.330401926Z" level=info msg="connecting to shim 85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce" address="unix:///run/containerd/s/1ceff3d1b2edceda2b3ff5e94493c1baf5b3eedf6e7aca46436cb31e9b2d069a" protocol=ttrpc version=3 Dec 16 12:18:05.391587 systemd[1]: Started cri-containerd-85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce.scope - libcontainer container 85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce. 
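The long run of driver-call.go and plugins.go errors above is kubelet probing the FlexVolume plugin directory before Calico has populated it: the flexvol-driver init container whose pod2daemon-flexvol image was just pulled is what installs the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, so until it runs the probe finds no executable, gets empty output, and the empty string fails JSON unmarshalling ("unexpected end of JSON input"). kubelet expects a FlexVolume driver to answer the init call with a single JSON status object on stdout. Below is a minimal, purely illustrative Python stand-in for that contract (the real driver is the compiled uds binary shipped by Calico, not a script):

import json
import sys

# Hypothetical stand-in for a FlexVolume driver executable. kubelet runs
# `<driver> init` and parses stdout as JSON; an absent or silent driver is
# what produces the "unexpected end of JSON input" errors in the log above.
def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())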
Dec 16 12:18:05.492000 audit: BPF prog-id=173 op=LOAD Dec 16 12:18:05.492000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4001 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323830623539613430613031343063373666653430356232306463 Dec 16 12:18:05.492000 audit: BPF prog-id=174 op=LOAD Dec 16 12:18:05.492000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4001 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323830623539613430613031343063373666653430356232306463 Dec 16 12:18:05.493000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:18:05.493000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323830623539613430613031343063373666653430356232306463 Dec 16 12:18:05.493000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:18:05.493000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323830623539613430613031343063373666653430356232306463 Dec 16 12:18:05.493000 audit: BPF prog-id=175 op=LOAD Dec 16 12:18:05.493000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4001 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:05.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835323830623539613430613031343063373666653430356232306463 Dec 16 12:18:05.539047 containerd[1971]: time="2025-12-16T12:18:05.538916195Z" level=info msg="StartContainer for 
\"85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce\" returns successfully" Dec 16 12:18:05.546085 kubelet[3411]: E1216 12:18:05.545242 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:05.577948 systemd[1]: cri-containerd-85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce.scope: Deactivated successfully. Dec 16 12:18:05.582000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:18:05.587512 containerd[1971]: time="2025-12-16T12:18:05.587323463Z" level=info msg="received container exit event container_id:\"85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce\" id:\"85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce\" pid:4158 exited_at:{seconds:1765887485 nanos:586573139}" Dec 16 12:18:05.633486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85280b59a40a0140c76fe405b20dc2d6382412f720e4976dbad29a197632cfce-rootfs.mount: Deactivated successfully. Dec 16 12:18:06.822142 containerd[1971]: time="2025-12-16T12:18:06.822080893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:18:07.546649 kubelet[3411]: E1216 12:18:07.546013 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:09.548585 kubelet[3411]: E1216 12:18:09.545611 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:09.737999 containerd[1971]: time="2025-12-16T12:18:09.737900452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:09.740089 containerd[1971]: time="2025-12-16T12:18:09.739980268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:18:09.742594 containerd[1971]: time="2025-12-16T12:18:09.742508488Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:09.746993 containerd[1971]: time="2025-12-16T12:18:09.746888992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:09.749043 containerd[1971]: time="2025-12-16T12:18:09.748217212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.926075995s" Dec 16 12:18:09.749043 containerd[1971]: 
time="2025-12-16T12:18:09.748272580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:18:09.754127 containerd[1971]: time="2025-12-16T12:18:09.753981016Z" level=info msg="CreateContainer within sandbox \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:18:09.777114 containerd[1971]: time="2025-12-16T12:18:09.777026260Z" level=info msg="Container 0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:09.798551 containerd[1971]: time="2025-12-16T12:18:09.798473488Z" level=info msg="CreateContainer within sandbox \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24\"" Dec 16 12:18:09.801102 containerd[1971]: time="2025-12-16T12:18:09.800476564Z" level=info msg="StartContainer for \"0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24\"" Dec 16 12:18:09.804170 containerd[1971]: time="2025-12-16T12:18:09.804112504Z" level=info msg="connecting to shim 0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24" address="unix:///run/containerd/s/1ceff3d1b2edceda2b3ff5e94493c1baf5b3eedf6e7aca46436cb31e9b2d069a" protocol=ttrpc version=3 Dec 16 12:18:09.855705 systemd[1]: Started cri-containerd-0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24.scope - libcontainer container 0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24. Dec 16 12:18:09.927000 audit: BPF prog-id=176 op=LOAD Dec 16 12:18:09.929458 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 12:18:09.929555 kernel: audit: type=1334 audit(1765887489.927:581): prog-id=176 op=LOAD Dec 16 12:18:09.927000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.939030 kernel: audit: type=1300 audit(1765887489.927:581): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.945896 kernel: audit: type=1327 audit(1765887489.927:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.930000 audit: BPF prog-id=177 op=LOAD Dec 16 12:18:09.948171 kernel: audit: type=1334 audit(1765887489.930:582): prog-id=177 op=LOAD Dec 16 12:18:09.930000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 
ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.954627 kernel: audit: type=1300 audit(1765887489.930:582): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.960932 kernel: audit: type=1327 audit(1765887489.930:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.930000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:18:09.963222 kernel: audit: type=1334 audit(1765887489.930:583): prog-id=177 op=UNLOAD Dec 16 12:18:09.930000 audit[4206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.969353 kernel: audit: type=1300 audit(1765887489.930:583): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.975534 kernel: audit: type=1327 audit(1765887489.930:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.930000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:18:09.977380 kernel: audit: type=1334 audit(1765887489.930:584): prog-id=176 op=UNLOAD Dec 16 12:18:09.930000 audit[4206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:09.930000 audit: BPF prog-id=178 op=LOAD Dec 16 12:18:09.930000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4001 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:09.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061653862393539386539333235626564666261666333363133323864 Dec 16 12:18:10.003345 containerd[1971]: time="2025-12-16T12:18:10.003272449Z" level=info msg="StartContainer for \"0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24\" returns successfully" Dec 16 12:18:10.970586 containerd[1971]: time="2025-12-16T12:18:10.970520850Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:18:10.975568 systemd[1]: cri-containerd-0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24.scope: Deactivated successfully. Dec 16 12:18:10.976136 systemd[1]: cri-containerd-0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24.scope: Consumed 954ms CPU time, 184M memory peak, 165.9M written to disk. Dec 16 12:18:10.983185 containerd[1971]: time="2025-12-16T12:18:10.983111070Z" level=info msg="received container exit event container_id:\"0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24\" id:\"0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24\" pid:4220 exited_at:{seconds:1765887490 nanos:982776462}" Dec 16 12:18:10.984000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:18:11.030681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ae8b9598e9325bedfbafc361328d0c1ee8fcbbe6f19f8392bc870e465b27e24-rootfs.mount: Deactivated successfully. Dec 16 12:18:11.066695 kubelet[3411]: I1216 12:18:11.064720 3411 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:18:11.151674 systemd[1]: Created slice kubepods-burstable-poda9a3cc19_451e_4d00_baff_e4e463318465.slice - libcontainer container kubepods-burstable-poda9a3cc19_451e_4d00_baff_e4e463318465.slice. Dec 16 12:18:11.185578 systemd[1]: Created slice kubepods-burstable-poddebc4e05_9014_472c_af09_ca9dd2acb4d3.slice - libcontainer container kubepods-burstable-poddebc4e05_9014_472c_af09_ca9dd2acb4d3.slice. Dec 16 12:18:11.202507 systemd[1]: Created slice kubepods-besteffort-pod9a9f64cf_c939_425f_bc9b_14da143ab498.slice - libcontainer container kubepods-besteffort-pod9a9f64cf_c939_425f_bc9b_14da143ab498.slice. Dec 16 12:18:11.220472 systemd[1]: Created slice kubepods-besteffort-poda0556f5e_184b_4527_b60e_270da372abfb.slice - libcontainer container kubepods-besteffort-poda0556f5e_184b_4527_b60e_270da372abfb.slice. 
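The audit records in this stretch identify the runc invocations only by their PROCTITLE field, which is the process argv hex-encoded with NUL separators. The helper below is an illustration added for readability, not a tool referenced by the log; the sample value is the leading portion of the proctitle strings recorded above:

# Illustrative helper: decode an audit PROCTITLE value (hex-encoded,
# NUL-separated argv) back into a command line.
def decode_proctitle(hex_value: str) -> list[str]:
    return bytes.fromhex(hex_value).decode("utf-8", errors="replace").split("\x00")

# Leading portion of the proctitle values logged above.
sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
print(decode_proctitle(sample))
# -> ['runc', '--root', '/run/containerd/runc/k8s.io']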
Dec 16 12:18:11.240276 kubelet[3411]: I1216 12:18:11.239303 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e39aa72-dd6b-4253-877f-1d57a9236239-config\") pod \"goldmane-666569f655-f4x44\" (UID: \"9e39aa72-dd6b-4253-877f-1d57a9236239\") " pod="calico-system/goldmane-666569f655-f4x44" Dec 16 12:18:11.240276 kubelet[3411]: I1216 12:18:11.239403 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e39aa72-dd6b-4253-877f-1d57a9236239-goldmane-ca-bundle\") pod \"goldmane-666569f655-f4x44\" (UID: \"9e39aa72-dd6b-4253-877f-1d57a9236239\") " pod="calico-system/goldmane-666569f655-f4x44" Dec 16 12:18:11.240276 kubelet[3411]: I1216 12:18:11.239447 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h456d\" (UniqueName: \"kubernetes.io/projected/9e39aa72-dd6b-4253-877f-1d57a9236239-kube-api-access-h456d\") pod \"goldmane-666569f655-f4x44\" (UID: \"9e39aa72-dd6b-4253-877f-1d57a9236239\") " pod="calico-system/goldmane-666569f655-f4x44" Dec 16 12:18:11.240276 kubelet[3411]: I1216 12:18:11.239485 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-backend-key-pair\") pod \"whisker-fd66cc59f-dqkc9\" (UID: \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\") " pod="calico-system/whisker-fd66cc59f-dqkc9" Dec 16 12:18:11.240276 kubelet[3411]: I1216 12:18:11.239527 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a0556f5e-184b-4527-b60e-270da372abfb-calico-apiserver-certs\") pod \"calico-apiserver-fcb9bdb55-c27r4\" (UID: \"a0556f5e-184b-4527-b60e-270da372abfb\") " pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" Dec 16 12:18:11.240854 kubelet[3411]: I1216 12:18:11.239572 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhqm\" (UniqueName: \"kubernetes.io/projected/a9a3cc19-451e-4d00-baff-e4e463318465-kube-api-access-cjhqm\") pod \"coredns-668d6bf9bc-d2s2m\" (UID: \"a9a3cc19-451e-4d00-baff-e4e463318465\") " pod="kube-system/coredns-668d6bf9bc-d2s2m" Dec 16 12:18:11.240854 kubelet[3411]: I1216 12:18:11.239610 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ntjh\" (UniqueName: \"kubernetes.io/projected/41fe1dec-6478-42fa-9c60-8b697b125498-kube-api-access-6ntjh\") pod \"calico-apiserver-fcb9bdb55-6k77x\" (UID: \"41fe1dec-6478-42fa-9c60-8b697b125498\") " pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" Dec 16 12:18:11.240854 kubelet[3411]: I1216 12:18:11.239648 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khtn7\" (UniqueName: \"kubernetes.io/projected/debc4e05-9014-472c-af09-ca9dd2acb4d3-kube-api-access-khtn7\") pod \"coredns-668d6bf9bc-wbgzs\" (UID: \"debc4e05-9014-472c-af09-ca9dd2acb4d3\") " pod="kube-system/coredns-668d6bf9bc-wbgzs" Dec 16 12:18:11.240854 kubelet[3411]: I1216 12:18:11.239689 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88br8\" (UniqueName: 
\"kubernetes.io/projected/a0556f5e-184b-4527-b60e-270da372abfb-kube-api-access-88br8\") pod \"calico-apiserver-fcb9bdb55-c27r4\" (UID: \"a0556f5e-184b-4527-b60e-270da372abfb\") " pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" Dec 16 12:18:11.240854 kubelet[3411]: I1216 12:18:11.239737 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9e39aa72-dd6b-4253-877f-1d57a9236239-goldmane-key-pair\") pod \"goldmane-666569f655-f4x44\" (UID: \"9e39aa72-dd6b-4253-877f-1d57a9236239\") " pod="calico-system/goldmane-666569f655-f4x44" Dec 16 12:18:11.241313 kubelet[3411]: I1216 12:18:11.239775 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9f64cf-c939-425f-bc9b-14da143ab498-tigera-ca-bundle\") pod \"calico-kube-controllers-5959d55c94-c8546\" (UID: \"9a9f64cf-c939-425f-bc9b-14da143ab498\") " pod="calico-system/calico-kube-controllers-5959d55c94-c8546" Dec 16 12:18:11.241313 kubelet[3411]: I1216 12:18:11.239814 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/debc4e05-9014-472c-af09-ca9dd2acb4d3-config-volume\") pod \"coredns-668d6bf9bc-wbgzs\" (UID: \"debc4e05-9014-472c-af09-ca9dd2acb4d3\") " pod="kube-system/coredns-668d6bf9bc-wbgzs" Dec 16 12:18:11.241313 kubelet[3411]: I1216 12:18:11.239856 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-ca-bundle\") pod \"whisker-fd66cc59f-dqkc9\" (UID: \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\") " pod="calico-system/whisker-fd66cc59f-dqkc9" Dec 16 12:18:11.241313 kubelet[3411]: I1216 12:18:11.239902 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9a3cc19-451e-4d00-baff-e4e463318465-config-volume\") pod \"coredns-668d6bf9bc-d2s2m\" (UID: \"a9a3cc19-451e-4d00-baff-e4e463318465\") " pod="kube-system/coredns-668d6bf9bc-d2s2m" Dec 16 12:18:11.241313 kubelet[3411]: I1216 12:18:11.239944 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpl9q\" (UniqueName: \"kubernetes.io/projected/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-kube-api-access-wpl9q\") pod \"whisker-fd66cc59f-dqkc9\" (UID: \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\") " pod="calico-system/whisker-fd66cc59f-dqkc9" Dec 16 12:18:11.243977 kubelet[3411]: I1216 12:18:11.239981 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41fe1dec-6478-42fa-9c60-8b697b125498-calico-apiserver-certs\") pod \"calico-apiserver-fcb9bdb55-6k77x\" (UID: \"41fe1dec-6478-42fa-9c60-8b697b125498\") " pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" Dec 16 12:18:11.243977 kubelet[3411]: I1216 12:18:11.240019 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z56p\" (UniqueName: \"kubernetes.io/projected/9a9f64cf-c939-425f-bc9b-14da143ab498-kube-api-access-8z56p\") pod \"calico-kube-controllers-5959d55c94-c8546\" (UID: \"9a9f64cf-c939-425f-bc9b-14da143ab498\") " 
pod="calico-system/calico-kube-controllers-5959d55c94-c8546" Dec 16 12:18:11.244978 systemd[1]: Created slice kubepods-besteffort-pode2ce7bb6_1f37_4a7c_9c57_1ede6fc94ef4.slice - libcontainer container kubepods-besteffort-pode2ce7bb6_1f37_4a7c_9c57_1ede6fc94ef4.slice. Dec 16 12:18:11.279400 systemd[1]: Created slice kubepods-besteffort-pod41fe1dec_6478_42fa_9c60_8b697b125498.slice - libcontainer container kubepods-besteffort-pod41fe1dec_6478_42fa_9c60_8b697b125498.slice. Dec 16 12:18:11.288027 systemd[1]: Created slice kubepods-besteffort-pod9e39aa72_dd6b_4253_877f_1d57a9236239.slice - libcontainer container kubepods-besteffort-pod9e39aa72_dd6b_4253_877f_1d57a9236239.slice. Dec 16 12:18:11.497471 containerd[1971]: time="2025-12-16T12:18:11.496547789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbgzs,Uid:debc4e05-9014-472c-af09-ca9dd2acb4d3,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:11.515444 containerd[1971]: time="2025-12-16T12:18:11.515272541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5959d55c94-c8546,Uid:9a9f64cf-c939-425f-bc9b-14da143ab498,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:11.531258 containerd[1971]: time="2025-12-16T12:18:11.531206849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-c27r4,Uid:a0556f5e-184b-4527-b60e-270da372abfb,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:18:11.564820 containerd[1971]: time="2025-12-16T12:18:11.564394433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd66cc59f-dqkc9,Uid:e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:11.564891 systemd[1]: Created slice kubepods-besteffort-pod7f31f51a_3fe7_4796_97ca_d9a3c9b5116f.slice - libcontainer container kubepods-besteffort-pod7f31f51a_3fe7_4796_97ca_d9a3c9b5116f.slice. Dec 16 12:18:11.570809 containerd[1971]: time="2025-12-16T12:18:11.570759293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wp4r,Uid:7f31f51a-3fe7-4796-97ca-d9a3c9b5116f,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:11.595425 containerd[1971]: time="2025-12-16T12:18:11.595354373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-6k77x,Uid:41fe1dec-6478-42fa-9c60-8b697b125498,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:18:11.597774 containerd[1971]: time="2025-12-16T12:18:11.597691229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f4x44,Uid:9e39aa72-dd6b-4253-877f-1d57a9236239,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:11.772824 containerd[1971]: time="2025-12-16T12:18:11.772401126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2s2m,Uid:a9a3cc19-451e-4d00-baff-e4e463318465,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:11.968001 containerd[1971]: time="2025-12-16T12:18:11.967939375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:18:12.177081 containerd[1971]: time="2025-12-16T12:18:12.176845888Z" level=error msg="Failed to destroy network for sandbox \"ce4ca7e4bf2c37abd336a2a358cfa3cf138dcd08607288bc23c2f1ebf50a0747\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.185605 systemd[1]: run-netns-cni\x2dd5f3a763\x2ded9b\x2de491\x2d682d\x2de8ebdb4a7f7a.mount: Deactivated successfully. 
Dec 16 12:18:12.193985 containerd[1971]: time="2025-12-16T12:18:12.193923088Z" level=error msg="Failed to destroy network for sandbox \"884f40896740de3eaf8f213d3a6d043eef4941cb32a23eb73c7f634d78c6ab2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.201399 systemd[1]: run-netns-cni\x2d6cd3f664\x2deea0\x2d5f02\x2d5a9e\x2d2a1860c41a71.mount: Deactivated successfully. Dec 16 12:18:12.206325 containerd[1971]: time="2025-12-16T12:18:12.206242480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbgzs,Uid:debc4e05-9014-472c-af09-ca9dd2acb4d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce4ca7e4bf2c37abd336a2a358cfa3cf138dcd08607288bc23c2f1ebf50a0747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.207640 kubelet[3411]: E1216 12:18:12.207443 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce4ca7e4bf2c37abd336a2a358cfa3cf138dcd08607288bc23c2f1ebf50a0747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.207640 kubelet[3411]: E1216 12:18:12.207566 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce4ca7e4bf2c37abd336a2a358cfa3cf138dcd08607288bc23c2f1ebf50a0747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbgzs" Dec 16 12:18:12.207640 kubelet[3411]: E1216 12:18:12.207602 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce4ca7e4bf2c37abd336a2a358cfa3cf138dcd08607288bc23c2f1ebf50a0747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wbgzs" Dec 16 12:18:12.209909 kubelet[3411]: E1216 12:18:12.207682 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wbgzs_kube-system(debc4e05-9014-472c-af09-ca9dd2acb4d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wbgzs_kube-system(debc4e05-9014-472c-af09-ca9dd2acb4d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce4ca7e4bf2c37abd336a2a358cfa3cf138dcd08607288bc23c2f1ebf50a0747\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wbgzs" podUID="debc4e05-9014-472c-af09-ca9dd2acb4d3" Dec 16 12:18:12.211723 containerd[1971]: time="2025-12-16T12:18:12.211250872Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7wp4r,Uid:7f31f51a-3fe7-4796-97ca-d9a3c9b5116f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"884f40896740de3eaf8f213d3a6d043eef4941cb32a23eb73c7f634d78c6ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.212160 kubelet[3411]: E1216 12:18:12.211608 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884f40896740de3eaf8f213d3a6d043eef4941cb32a23eb73c7f634d78c6ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.212160 kubelet[3411]: E1216 12:18:12.211692 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884f40896740de3eaf8f213d3a6d043eef4941cb32a23eb73c7f634d78c6ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:12.212160 kubelet[3411]: E1216 12:18:12.211726 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884f40896740de3eaf8f213d3a6d043eef4941cb32a23eb73c7f634d78c6ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wp4r" Dec 16 12:18:12.213654 kubelet[3411]: E1216 12:18:12.211802 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"884f40896740de3eaf8f213d3a6d043eef4941cb32a23eb73c7f634d78c6ab2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:12.270673 containerd[1971]: time="2025-12-16T12:18:12.270607204Z" level=error msg="Failed to destroy network for sandbox \"29462f43b11ef988ead56793609fa10393a92d5ec8ae2229c496d46d24d9c3be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.277291 containerd[1971]: time="2025-12-16T12:18:12.275668252Z" level=error msg="Failed to destroy network for sandbox \"ed1c0983402b1a19469230123bb08e3e00cb1d85b2d6dd9b4182534acb569459\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.280916 systemd[1]: 
run-netns-cni\x2d7f62cb53\x2df84c\x2d97fc\x2d658e\x2d68a0fc5fec79.mount: Deactivated successfully. Dec 16 12:18:12.283385 containerd[1971]: time="2025-12-16T12:18:12.283206485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-c27r4,Uid:a0556f5e-184b-4527-b60e-270da372abfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29462f43b11ef988ead56793609fa10393a92d5ec8ae2229c496d46d24d9c3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.284049 kubelet[3411]: E1216 12:18:12.283880 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29462f43b11ef988ead56793609fa10393a92d5ec8ae2229c496d46d24d9c3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.284049 kubelet[3411]: E1216 12:18:12.284195 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29462f43b11ef988ead56793609fa10393a92d5ec8ae2229c496d46d24d9c3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" Dec 16 12:18:12.284049 kubelet[3411]: E1216 12:18:12.284355 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29462f43b11ef988ead56793609fa10393a92d5ec8ae2229c496d46d24d9c3be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" Dec 16 12:18:12.285174 kubelet[3411]: E1216 12:18:12.284723 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fcb9bdb55-c27r4_calico-apiserver(a0556f5e-184b-4527-b60e-270da372abfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fcb9bdb55-c27r4_calico-apiserver(a0556f5e-184b-4527-b60e-270da372abfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29462f43b11ef988ead56793609fa10393a92d5ec8ae2229c496d46d24d9c3be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:18:12.287851 containerd[1971]: time="2025-12-16T12:18:12.287419805Z" level=error msg="Failed to destroy network for sandbox \"0b2879a6487e2fccf6ac4faf6c51fdfdd6e26f0a098a2003a2022bef35b9f0c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.293881 systemd[1]: run-netns-cni\x2dfbb2ed05\x2df6a3\x2df604\x2d8e56\x2d7422b670534c.mount: Deactivated successfully. 
Dec 16 12:18:12.306904 containerd[1971]: time="2025-12-16T12:18:12.306800261Z" level=error msg="Failed to destroy network for sandbox \"5f9fd880553f8c2490aa3e9ec2f126e7698a755f7440fa24673a42073555cffb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.310716 containerd[1971]: time="2025-12-16T12:18:12.310565189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f4x44,Uid:9e39aa72-dd6b-4253-877f-1d57a9236239,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b2879a6487e2fccf6ac4faf6c51fdfdd6e26f0a098a2003a2022bef35b9f0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.311601 kubelet[3411]: E1216 12:18:12.311485 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b2879a6487e2fccf6ac4faf6c51fdfdd6e26f0a098a2003a2022bef35b9f0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.311601 kubelet[3411]: E1216 12:18:12.311570 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b2879a6487e2fccf6ac4faf6c51fdfdd6e26f0a098a2003a2022bef35b9f0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-f4x44" Dec 16 12:18:12.311908 kubelet[3411]: E1216 12:18:12.311603 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b2879a6487e2fccf6ac4faf6c51fdfdd6e26f0a098a2003a2022bef35b9f0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-f4x44" Dec 16 12:18:12.311908 kubelet[3411]: E1216 12:18:12.311664 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-f4x44_calico-system(9e39aa72-dd6b-4253-877f-1d57a9236239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-f4x44_calico-system(9e39aa72-dd6b-4253-877f-1d57a9236239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b2879a6487e2fccf6ac4faf6c51fdfdd6e26f0a098a2003a2022bef35b9f0c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:18:12.318146 containerd[1971]: time="2025-12-16T12:18:12.317293913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2s2m,Uid:a9a3cc19-451e-4d00-baff-e4e463318465,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ed1c0983402b1a19469230123bb08e3e00cb1d85b2d6dd9b4182534acb569459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.319587 containerd[1971]: time="2025-12-16T12:18:12.319430429Z" level=error msg="Failed to destroy network for sandbox \"b680b02df4b21f81dfe0f83c7e68be7f3d19d0b33d23acda73ef16e349270618\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.323602 kubelet[3411]: E1216 12:18:12.323495 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1c0983402b1a19469230123bb08e3e00cb1d85b2d6dd9b4182534acb569459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.323602 kubelet[3411]: E1216 12:18:12.323587 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1c0983402b1a19469230123bb08e3e00cb1d85b2d6dd9b4182534acb569459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2s2m" Dec 16 12:18:12.324233 kubelet[3411]: E1216 12:18:12.323625 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1c0983402b1a19469230123bb08e3e00cb1d85b2d6dd9b4182534acb569459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2s2m" Dec 16 12:18:12.324233 kubelet[3411]: E1216 12:18:12.323700 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d2s2m_kube-system(a9a3cc19-451e-4d00-baff-e4e463318465)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d2s2m_kube-system(a9a3cc19-451e-4d00-baff-e4e463318465)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed1c0983402b1a19469230123bb08e3e00cb1d85b2d6dd9b4182534acb569459\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d2s2m" podUID="a9a3cc19-451e-4d00-baff-e4e463318465" Dec 16 12:18:12.331892 containerd[1971]: time="2025-12-16T12:18:12.331684145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5959d55c94-c8546,Uid:9a9f64cf-c939-425f-bc9b-14da143ab498,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9fd880553f8c2490aa3e9ec2f126e7698a755f7440fa24673a42073555cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.333007 kubelet[3411]: E1216 12:18:12.332516 3411 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9fd880553f8c2490aa3e9ec2f126e7698a755f7440fa24673a42073555cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.333007 kubelet[3411]: E1216 12:18:12.332588 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9fd880553f8c2490aa3e9ec2f126e7698a755f7440fa24673a42073555cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" Dec 16 12:18:12.333007 kubelet[3411]: E1216 12:18:12.332628 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9fd880553f8c2490aa3e9ec2f126e7698a755f7440fa24673a42073555cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" Dec 16 12:18:12.333440 kubelet[3411]: E1216 12:18:12.332688 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5959d55c94-c8546_calico-system(9a9f64cf-c939-425f-bc9b-14da143ab498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5959d55c94-c8546_calico-system(9a9f64cf-c939-425f-bc9b-14da143ab498)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f9fd880553f8c2490aa3e9ec2f126e7698a755f7440fa24673a42073555cffb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:18:12.334276 containerd[1971]: time="2025-12-16T12:18:12.333696785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-6k77x,Uid:41fe1dec-6478-42fa-9c60-8b697b125498,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b680b02df4b21f81dfe0f83c7e68be7f3d19d0b33d23acda73ef16e349270618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.336809 kubelet[3411]: E1216 12:18:12.336426 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b680b02df4b21f81dfe0f83c7e68be7f3d19d0b33d23acda73ef16e349270618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.336809 kubelet[3411]: E1216 12:18:12.336749 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b680b02df4b21f81dfe0f83c7e68be7f3d19d0b33d23acda73ef16e349270618\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" Dec 16 12:18:12.337581 kubelet[3411]: E1216 12:18:12.337491 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b680b02df4b21f81dfe0f83c7e68be7f3d19d0b33d23acda73ef16e349270618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" Dec 16 12:18:12.338168 kubelet[3411]: E1216 12:18:12.338032 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fcb9bdb55-6k77x_calico-apiserver(41fe1dec-6478-42fa-9c60-8b697b125498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fcb9bdb55-6k77x_calico-apiserver(41fe1dec-6478-42fa-9c60-8b697b125498)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b680b02df4b21f81dfe0f83c7e68be7f3d19d0b33d23acda73ef16e349270618\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:18:12.342101 containerd[1971]: time="2025-12-16T12:18:12.341961617Z" level=error msg="Failed to destroy network for sandbox \"25a5b86bf1a80199b52f4fe5dfbd53a1436b48bf05a462139f13e52df8bf44d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.348857 containerd[1971]: time="2025-12-16T12:18:12.348416477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd66cc59f-dqkc9,Uid:e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a5b86bf1a80199b52f4fe5dfbd53a1436b48bf05a462139f13e52df8bf44d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.350032 kubelet[3411]: E1216 12:18:12.349989 3411 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a5b86bf1a80199b52f4fe5dfbd53a1436b48bf05a462139f13e52df8bf44d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:18:12.350253 kubelet[3411]: E1216 12:18:12.350207 3411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a5b86bf1a80199b52f4fe5dfbd53a1436b48bf05a462139f13e52df8bf44d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fd66cc59f-dqkc9" Dec 16 12:18:12.350436 kubelet[3411]: E1216 12:18:12.350362 3411 kuberuntime_manager.go:1237] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a5b86bf1a80199b52f4fe5dfbd53a1436b48bf05a462139f13e52df8bf44d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fd66cc59f-dqkc9" Dec 16 12:18:12.351411 kubelet[3411]: E1216 12:18:12.350561 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fd66cc59f-dqkc9_calico-system(e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fd66cc59f-dqkc9_calico-system(e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25a5b86bf1a80199b52f4fe5dfbd53a1436b48bf05a462139f13e52df8bf44d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fd66cc59f-dqkc9" podUID="e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4" Dec 16 12:18:13.029022 systemd[1]: run-netns-cni\x2d6406e623\x2d648d\x2df85b\x2d036d\x2d7766fbb7ff1e.mount: Deactivated successfully. Dec 16 12:18:13.029563 systemd[1]: run-netns-cni\x2dc57540c0\x2db537\x2d789f\x2d58df\x2d5042c8e2ee5d.mount: Deactivated successfully. Dec 16 12:18:13.029800 systemd[1]: run-netns-cni\x2df10144fb\x2d4e5f\x2de1f1\x2df65f\x2d015f14732a33.mount: Deactivated successfully. Dec 16 12:18:13.030026 systemd[1]: run-netns-cni\x2d8d33fd5a\x2d4ecb\x2d4641\x2d2f89\x2d25a4f6034377.mount: Deactivated successfully. Dec 16 12:18:18.322754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717021379.mount: Deactivated successfully. 
Dec 16 12:18:18.375460 containerd[1971]: time="2025-12-16T12:18:18.375283571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:18.377652 containerd[1971]: time="2025-12-16T12:18:18.377574371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:18:18.379499 containerd[1971]: time="2025-12-16T12:18:18.379423415Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:18.383753 containerd[1971]: time="2025-12-16T12:18:18.383684135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:18:18.384759 containerd[1971]: time="2025-12-16T12:18:18.384695147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.41669054s" Dec 16 12:18:18.384970 containerd[1971]: time="2025-12-16T12:18:18.384757799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:18:18.414348 containerd[1971]: time="2025-12-16T12:18:18.413795675Z" level=info msg="CreateContainer within sandbox \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:18:18.458004 containerd[1971]: time="2025-12-16T12:18:18.457942079Z" level=info msg="Container edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:18.479394 containerd[1971]: time="2025-12-16T12:18:18.479309663Z" level=info msg="CreateContainer within sandbox \"65e5545e0f9583bab658b7674e4374935db3f8d40bc4f277b46f58eb2138e0fb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b\"" Dec 16 12:18:18.480461 containerd[1971]: time="2025-12-16T12:18:18.480380615Z" level=info msg="StartContainer for \"edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b\"" Dec 16 12:18:18.485267 containerd[1971]: time="2025-12-16T12:18:18.485195783Z" level=info msg="connecting to shim edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b" address="unix:///run/containerd/s/1ceff3d1b2edceda2b3ff5e94493c1baf5b3eedf6e7aca46436cb31e9b2d069a" protocol=ttrpc version=3 Dec 16 12:18:18.558418 systemd[1]: Started cri-containerd-edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b.scope - libcontainer container edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b. 
Dec 16 12:18:18.632000 audit: BPF prog-id=179 op=LOAD Dec 16 12:18:18.635495 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:18:18.635618 kernel: audit: type=1334 audit(1765887498.632:587): prog-id=179 op=LOAD Dec 16 12:18:18.632000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.642946 kernel: audit: type=1300 audit(1765887498.632:587): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.643049 kernel: audit: type=1327 audit(1765887498.632:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.633000 audit: BPF prog-id=180 op=LOAD Dec 16 12:18:18.650165 kernel: audit: type=1334 audit(1765887498.633:588): prog-id=180 op=LOAD Dec 16 12:18:18.633000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.656900 kernel: audit: type=1300 audit(1765887498.633:588): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.656966 kernel: audit: type=1327 audit(1765887498.633:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.637000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:18:18.664454 kernel: audit: type=1334 audit(1765887498.637:589): prog-id=180 op=UNLOAD Dec 16 12:18:18.637000 audit[4473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.671106 kernel: audit: type=1300 
audit(1765887498.637:589): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.677163 kernel: audit: type=1327 audit(1765887498.637:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.637000 audit: BPF prog-id=179 op=UNLOAD Dec 16 12:18:18.679349 kernel: audit: type=1334 audit(1765887498.637:590): prog-id=179 op=UNLOAD Dec 16 12:18:18.637000 audit[4473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.637000 audit: BPF prog-id=181 op=LOAD Dec 16 12:18:18.637000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4001 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564623361633862386234343534336639386465323533343731383133 Dec 16 12:18:18.719533 containerd[1971]: time="2025-12-16T12:18:18.719356164Z" level=info msg="StartContainer for \"edb3ac8b8b44543f98de25347181346685e1f88179aa52b86cca8af20e9ab82b\" returns successfully" Dec 16 12:18:18.979555 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:18:18.979689 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:18:19.058673 kubelet[3411]: I1216 12:18:19.058557 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f2pkl" podStartSLOduration=1.584044832 podStartE2EDuration="18.058523998s" podCreationTimestamp="2025-12-16 12:18:01 +0000 UTC" firstStartedPulling="2025-12-16 12:18:01.913008621 +0000 UTC m=+31.087979063" lastFinishedPulling="2025-12-16 12:18:18.387487787 +0000 UTC m=+47.562458229" observedRunningTime="2025-12-16 12:18:19.057341434 +0000 UTC m=+48.232311984" watchObservedRunningTime="2025-12-16 12:18:19.058523998 +0000 UTC m=+48.233494428" Dec 16 12:18:19.510888 kubelet[3411]: I1216 12:18:19.510741 3411 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-backend-key-pair\") pod \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\" (UID: \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\") " Dec 16 12:18:19.510888 kubelet[3411]: I1216 12:18:19.510821 3411 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpl9q\" (UniqueName: \"kubernetes.io/projected/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-kube-api-access-wpl9q\") pod \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\" (UID: \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\") " Dec 16 12:18:19.510888 kubelet[3411]: I1216 12:18:19.510864 3411 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-ca-bundle\") pod \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\" (UID: \"e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4\") " Dec 16 12:18:19.511673 kubelet[3411]: I1216 12:18:19.511622 3411 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4" (UID: "e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:18:19.526004 kubelet[3411]: I1216 12:18:19.525905 3411 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4" (UID: "e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:18:19.525946 systemd[1]: var-lib-kubelet-pods-e2ce7bb6\x2d1f37\x2d4a7c\x2d9c57\x2d1ede6fc94ef4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:18:19.534749 kubelet[3411]: I1216 12:18:19.534637 3411 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-kube-api-access-wpl9q" (OuterVolumeSpecName: "kube-api-access-wpl9q") pod "e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4" (UID: "e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4"). InnerVolumeSpecName "kube-api-access-wpl9q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:18:19.536384 systemd[1]: var-lib-kubelet-pods-e2ce7bb6\x2d1f37\x2d4a7c\x2d9c57\x2d1ede6fc94ef4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpl9q.mount: Deactivated successfully. 
Dec 16 12:18:19.573639 systemd[1]: Removed slice kubepods-besteffort-pode2ce7bb6_1f37_4a7c_9c57_1ede6fc94ef4.slice - libcontainer container kubepods-besteffort-pode2ce7bb6_1f37_4a7c_9c57_1ede6fc94ef4.slice. Dec 16 12:18:19.611616 kubelet[3411]: I1216 12:18:19.611269 3411 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-backend-key-pair\") on node \"ip-172-31-20-6\" DevicePath \"\"" Dec 16 12:18:19.612242 kubelet[3411]: I1216 12:18:19.611926 3411 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wpl9q\" (UniqueName: \"kubernetes.io/projected/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-kube-api-access-wpl9q\") on node \"ip-172-31-20-6\" DevicePath \"\"" Dec 16 12:18:19.612730 kubelet[3411]: I1216 12:18:19.612686 3411 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4-whisker-ca-bundle\") on node \"ip-172-31-20-6\" DevicePath \"\"" Dec 16 12:18:20.117032 systemd[1]: Created slice kubepods-besteffort-pod41dcedc9_f0d1_4389_a970_074857eabb8a.slice - libcontainer container kubepods-besteffort-pod41dcedc9_f0d1_4389_a970_074857eabb8a.slice. Dec 16 12:18:20.120001 kubelet[3411]: I1216 12:18:20.119077 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41dcedc9-f0d1-4389-a970-074857eabb8a-whisker-ca-bundle\") pod \"whisker-54cb69c56c-bnxxh\" (UID: \"41dcedc9-f0d1-4389-a970-074857eabb8a\") " pod="calico-system/whisker-54cb69c56c-bnxxh" Dec 16 12:18:20.120001 kubelet[3411]: I1216 12:18:20.119157 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclhd\" (UniqueName: \"kubernetes.io/projected/41dcedc9-f0d1-4389-a970-074857eabb8a-kube-api-access-dclhd\") pod \"whisker-54cb69c56c-bnxxh\" (UID: \"41dcedc9-f0d1-4389-a970-074857eabb8a\") " pod="calico-system/whisker-54cb69c56c-bnxxh" Dec 16 12:18:20.120001 kubelet[3411]: I1216 12:18:20.119206 3411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/41dcedc9-f0d1-4389-a970-074857eabb8a-whisker-backend-key-pair\") pod \"whisker-54cb69c56c-bnxxh\" (UID: \"41dcedc9-f0d1-4389-a970-074857eabb8a\") " pod="calico-system/whisker-54cb69c56c-bnxxh" Dec 16 12:18:20.429704 containerd[1971]: time="2025-12-16T12:18:20.429357817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cb69c56c-bnxxh,Uid:41dcedc9-f0d1-4389-a970-074857eabb8a,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:20.735034 (udev-worker)[4512]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:18:20.739223 systemd-networkd[1564]: cali91c6949d5f9: Link UP Dec 16 12:18:20.741852 systemd-networkd[1564]: cali91c6949d5f9: Gained carrier Dec 16 12:18:20.783600 containerd[1971]: 2025-12-16 12:18:20.486 [INFO][4590] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:18:20.783600 containerd[1971]: 2025-12-16 12:18:20.559 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0 whisker-54cb69c56c- calico-system 41dcedc9-f0d1-4389-a970-074857eabb8a 905 0 2025-12-16 12:18:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54cb69c56c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-6 whisker-54cb69c56c-bnxxh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali91c6949d5f9 [] [] }} ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-" Dec 16 12:18:20.783600 containerd[1971]: 2025-12-16 12:18:20.559 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.783600 containerd[1971]: 2025-12-16 12:18:20.647 [INFO][4602] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" HandleID="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Workload="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.647 [INFO][4602] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" HandleID="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Workload="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-6", "pod":"whisker-54cb69c56c-bnxxh", "timestamp":"2025-12-16 12:18:20.647086382 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.647 [INFO][4602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.647 [INFO][4602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.647 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.662 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" host="ip-172-31-20-6" Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.671 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.678 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.681 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.685 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:20.783994 containerd[1971]: 2025-12-16 12:18:20.685 [INFO][4602] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" host="ip-172-31-20-6" Dec 16 12:18:20.784675 containerd[1971]: 2025-12-16 12:18:20.688 [INFO][4602] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0 Dec 16 12:18:20.784675 containerd[1971]: 2025-12-16 12:18:20.697 [INFO][4602] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" host="ip-172-31-20-6" Dec 16 12:18:20.784675 containerd[1971]: 2025-12-16 12:18:20.711 [INFO][4602] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.1/26] block=192.168.13.0/26 handle="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" host="ip-172-31-20-6" Dec 16 12:18:20.784675 containerd[1971]: 2025-12-16 12:18:20.711 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.1/26] handle="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" host="ip-172-31-20-6" Dec 16 12:18:20.784675 containerd[1971]: 2025-12-16 12:18:20.711 [INFO][4602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:18:20.784675 containerd[1971]: 2025-12-16 12:18:20.711 [INFO][4602] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.1/26] IPv6=[] ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" HandleID="k8s-pod-network.280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Workload="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.784948 containerd[1971]: 2025-12-16 12:18:20.720 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0", GenerateName:"whisker-54cb69c56c-", Namespace:"calico-system", SelfLink:"", UID:"41dcedc9-f0d1-4389-a970-074857eabb8a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 18, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54cb69c56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"whisker-54cb69c56c-bnxxh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali91c6949d5f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:20.784948 containerd[1971]: 2025-12-16 12:18:20.721 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.1/32] ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.785810 containerd[1971]: 2025-12-16 12:18:20.721 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91c6949d5f9 ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.785810 containerd[1971]: 2025-12-16 12:18:20.744 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.786110 containerd[1971]: 2025-12-16 12:18:20.748 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" 
WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0", GenerateName:"whisker-54cb69c56c-", Namespace:"calico-system", SelfLink:"", UID:"41dcedc9-f0d1-4389-a970-074857eabb8a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 18, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54cb69c56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0", Pod:"whisker-54cb69c56c-bnxxh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali91c6949d5f9", MAC:"0e:b2:10:69:60:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:20.786342 containerd[1971]: 2025-12-16 12:18:20.778 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" Namespace="calico-system" Pod="whisker-54cb69c56c-bnxxh" WorkloadEndpoint="ip--172--31--20--6-k8s-whisker--54cb69c56c--bnxxh-eth0" Dec 16 12:18:20.904534 containerd[1971]: time="2025-12-16T12:18:20.904334559Z" level=info msg="connecting to shim 280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0" address="unix:///run/containerd/s/599475ac85dd09743e68e628998d37c8461f380bf96a244ac522dc582ce26e5e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:21.173155 systemd[1]: Started cri-containerd-280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0.scope - libcontainer container 280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0. 
Dec 16 12:18:21.231000 audit: BPF prog-id=182 op=LOAD Dec 16 12:18:21.234000 audit: BPF prog-id=183 op=LOAD Dec 16 12:18:21.234000 audit[4670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.235000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:18:21.235000 audit[4670]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.238000 audit: BPF prog-id=184 op=LOAD Dec 16 12:18:21.238000 audit[4670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.239000 audit: BPF prog-id=185 op=LOAD Dec 16 12:18:21.239000 audit[4670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.240000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:18:21.240000 audit[4670]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.240000 audit: BPF prog-id=184 op=UNLOAD Dec 16 12:18:21.240000 audit[4670]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.241000 audit: BPF prog-id=186 op=LOAD Dec 16 12:18:21.241000 audit[4670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4655 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:21.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238306661663431346634653861336263336538613264613362346136 Dec 16 12:18:21.353142 containerd[1971]: time="2025-12-16T12:18:21.352997054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cb69c56c-bnxxh,Uid:41dcedc9-f0d1-4389-a970-074857eabb8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"280faf414f4e8a3bc3e8a2da3b4a66b3b1d832ee3342c471cc2ebe29c983cdd0\"" Dec 16 12:18:21.360174 containerd[1971]: time="2025-12-16T12:18:21.360107378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:18:21.555456 kubelet[3411]: I1216 12:18:21.555395 3411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4" path="/var/lib/kubelet/pods/e2ce7bb6-1f37-4a7c-9c57-1ede6fc94ef4/volumes" Dec 16 12:18:21.682181 containerd[1971]: time="2025-12-16T12:18:21.681997887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:21.685498 containerd[1971]: time="2025-12-16T12:18:21.685273767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:18:21.685498 containerd[1971]: time="2025-12-16T12:18:21.685278267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:21.686233 kubelet[3411]: E1216 12:18:21.685970 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:18:21.686233 kubelet[3411]: E1216 12:18:21.686032 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:18:21.692900 kubelet[3411]: E1216 12:18:21.692787 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f43187b9eb34459fb9682ed9d785cfc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:21.698357 containerd[1971]: time="2025-12-16T12:18:21.697755123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:18:22.002406 containerd[1971]: time="2025-12-16T12:18:22.002250229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:22.005034 containerd[1971]: time="2025-12-16T12:18:22.004635433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:18:22.005292 containerd[1971]: time="2025-12-16T12:18:22.004701013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:22.005997 kubelet[3411]: E1216 12:18:22.005776 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:18:22.006303 kubelet[3411]: E1216 12:18:22.006048 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:18:22.006846 kubelet[3411]: E1216 12:18:22.006704 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:22.008364 kubelet[3411]: E1216 12:18:22.007975 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:18:22.115000 audit: BPF prog-id=187 op=LOAD Dec 16 12:18:22.115000 audit[4807]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcba76c28 a2=98 a3=ffffcba76c18 items=0 ppid=4637 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.115000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:18:22.115000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:18:22.115000 audit[4807]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcba76bf8 a3=0 items=0 ppid=4637 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.115000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:18:22.116000 audit: BPF prog-id=188 op=LOAD Dec 16 12:18:22.116000 audit[4807]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcba76ad8 a2=74 a3=95 items=0 ppid=4637 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:18:22.116000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:18:22.116000 audit[4807]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4637 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:18:22.116000 audit: BPF prog-id=189 op=LOAD Dec 16 12:18:22.116000 audit[4807]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcba76b08 a2=40 a3=ffffcba76b38 items=0 ppid=4637 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.116000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:18:22.117000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:18:22.117000 audit[4807]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcba76b38 items=0 ppid=4637 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.117000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:18:22.125000 audit: BPF prog-id=190 op=LOAD Dec 16 12:18:22.125000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff7d76c58 a2=98 a3=fffff7d76c48 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.125000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:18:22.125000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff7d76c28 a3=0 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.125000 audit: BPF prog-id=191 op=LOAD Dec 16 12:18:22.125000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff7d768e8 a2=74 a3=95 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.125000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:18:22.125000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.125000 audit: BPF prog-id=192 op=LOAD Dec 16 12:18:22.125000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff7d76948 a2=94 a3=2 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.125000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:18:22.125000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.410000 audit: BPF prog-id=193 op=LOAD Dec 16 12:18:22.410000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff7d76908 a2=40 a3=fffff7d76938 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:18:22.410000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.410000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:18:22.410000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff7d76938 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.410000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.428000 audit: BPF prog-id=194 op=LOAD Dec 16 12:18:22.428000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff7d76918 a2=94 a3=4 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.428000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.428000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:18:22.428000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.428000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.429000 audit: BPF prog-id=195 op=LOAD Dec 16 12:18:22.429000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff7d76758 a2=94 a3=5 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.429000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.429000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:18:22.429000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.429000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.429000 audit: BPF prog-id=196 op=LOAD Dec 16 12:18:22.429000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff7d76988 a2=94 a3=6 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.429000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.429000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:18:22.429000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.429000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.430000 audit: BPF prog-id=197 op=LOAD Dec 16 12:18:22.430000 audit[4808]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff7d76158 a2=94 a3=83 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.430000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.431000 audit: BPF prog-id=198 op=LOAD Dec 16 12:18:22.431000 audit[4808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff7d75f18 a2=94 a3=2 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.431000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.431000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:18:22.431000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.431000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.432000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:18:22.432000 audit[4808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=13874620 a3=13867b00 items=0 ppid=4637 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:18:22.437350 systemd-networkd[1564]: cali91c6949d5f9: Gained IPv6LL Dec 16 12:18:22.454000 audit: BPF prog-id=199 op=LOAD Dec 16 12:18:22.454000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe79d1f18 a2=98 a3=ffffe79d1f08 items=0 ppid=4637 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:18:22.454000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:18:22.454000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe79d1ee8 a3=0 items=0 ppid=4637 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:18:22.454000 audit: BPF prog-id=200 op=LOAD Dec 16 12:18:22.454000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe79d1dc8 a2=74 a3=95 items=0 ppid=4637 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:18:22.454000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:18:22.454000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4637 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:18:22.454000 audit: BPF prog-id=201 op=LOAD Dec 16 12:18:22.454000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe79d1df8 a2=40 a3=ffffe79d1e28 items=0 ppid=4637 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:18:22.454000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:18:22.454000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe79d1e28 items=0 ppid=4637 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:18:22.575371 (udev-worker)[4513]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:18:22.577121 systemd-networkd[1564]: vxlan.calico: Link UP Dec 16 12:18:22.577135 systemd-networkd[1564]: vxlan.calico: Gained carrier Dec 16 12:18:22.619000 audit: BPF prog-id=202 op=LOAD Dec 16 12:18:22.619000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8c4d438 a2=98 a3=fffff8c4d428 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.619000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.620000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:18:22.620000 audit[4837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff8c4d408 a3=0 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.620000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.620000 audit: BPF prog-id=203 op=LOAD Dec 16 12:18:22.620000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8c4d118 a2=74 a3=95 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.620000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.620000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:18:22.620000 audit[4837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.620000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.620000 audit: BPF prog-id=204 op=LOAD Dec 16 12:18:22.620000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8c4d178 a2=94 a3=2 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.620000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.620000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:18:22.620000 audit[4837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 
ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.620000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.620000 audit: BPF prog-id=205 op=LOAD Dec 16 12:18:22.620000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8c4cff8 a2=40 a3=fffff8c4d028 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.620000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.621000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:18:22.621000 audit[4837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff8c4d028 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.621000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.621000 audit: BPF prog-id=206 op=LOAD Dec 16 12:18:22.621000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8c4d148 a2=94 a3=b7 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.621000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.621000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:18:22.621000 audit[4837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.621000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.623000 audit: BPF prog-id=207 op=LOAD Dec 16 12:18:22.623000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8c4c7f8 a2=94 a3=2 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.623000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.623000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:18:22.623000 audit[4837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.623000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.623000 audit: BPF prog-id=208 op=LOAD Dec 16 12:18:22.623000 audit[4837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8c4c988 a2=94 a3=30 items=0 ppid=4637 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.623000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:18:22.634000 audit: BPF prog-id=209 op=LOAD Dec 16 12:18:22.634000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8a871d8 a2=98 a3=ffffe8a871c8 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.634000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.636000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:18:22.636000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe8a871a8 a3=0 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.636000 audit: BPF prog-id=210 op=LOAD Dec 16 12:18:22.636000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe8a86e68 a2=74 a3=95 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.636000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:18:22.636000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4637 pid=4841 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.636000 audit: BPF prog-id=211 op=LOAD Dec 16 12:18:22.636000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe8a86ec8 a2=94 a3=2 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.636000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:18:22.636000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.841000 audit: BPF prog-id=212 op=LOAD Dec 16 12:18:22.841000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe8a86e88 a2=40 a3=ffffe8a86eb8 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.841000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.842000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:18:22.842000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe8a86eb8 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.842000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.860000 audit: BPF prog-id=213 op=LOAD Dec 16 12:18:22.860000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe8a86e98 a2=94 a3=4 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.860000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.861000 audit: BPF prog-id=213 op=UNLOAD Dec 16 
12:18:22.861000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.861000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.862000 audit: BPF prog-id=214 op=LOAD Dec 16 12:18:22.862000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe8a86cd8 a2=94 a3=5 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.862000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.862000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:18:22.862000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.862000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.862000 audit: BPF prog-id=215 op=LOAD Dec 16 12:18:22.862000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe8a86f08 a2=94 a3=6 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.862000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.862000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:18:22.862000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.862000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.863000 audit: BPF prog-id=216 op=LOAD Dec 16 12:18:22.863000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe8a866d8 a2=94 a3=83 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.863000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.864000 audit: BPF prog-id=217 op=LOAD Dec 16 12:18:22.864000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe8a86498 a2=94 a3=2 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.864000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.865000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:18:22.865000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.865000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.866000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:18:22.866000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=dcd620 a3=dc0b00 items=0 ppid=4637 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.866000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:18:22.877000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:18:22.877000 audit[4637]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4001093b80 a2=0 a3=0 items=0 ppid=4615 pid=4637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.877000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:18:22.976000 audit[4864]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4864 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:22.976000 audit[4864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc268ae20 a2=0 a3=ffffa3550fa8 items=0 ppid=4637 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.976000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:22.984000 audit[4866]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4866 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:22.984000 audit[4866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcf5081c0 a2=0 
a3=ffffb392dfa8 items=0 ppid=4637 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.984000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:22.987000 audit[4863]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4863 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:22.987000 audit[4863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcfa52a20 a2=0 a3=ffff8a896fa8 items=0 ppid=4637 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:22.987000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:23.007830 kubelet[3411]: E1216 12:18:23.007701 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:18:23.012000 audit[4869]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4869 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:23.012000 audit[4869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff62fd680 a2=0 a3=ffffaf1b8fa8 items=0 ppid=4637 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:23.012000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:23.079000 audit[4878]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:23.079000 audit[4878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff7a31b80 a2=0 a3=1 items=0 ppid=3639 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:23.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:23.085000 
audit[4878]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:23.085000 audit[4878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff7a31b80 a2=0 a3=1 items=0 ppid=3639 pid=4878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:23.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:23.548088 containerd[1971]: time="2025-12-16T12:18:23.547347004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-c27r4,Uid:a0556f5e-184b-4527-b60e-270da372abfb,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:18:23.549107 containerd[1971]: time="2025-12-16T12:18:23.548971000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-6k77x,Uid:41fe1dec-6478-42fa-9c60-8b697b125498,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:18:23.550143 containerd[1971]: time="2025-12-16T12:18:23.549662080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f4x44,Uid:9e39aa72-dd6b-4253-877f-1d57a9236239,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:23.784596 systemd-networkd[1564]: vxlan.calico: Gained IPv6LL Dec 16 12:18:23.924420 systemd-networkd[1564]: cali01a2e1d9fb5: Link UP Dec 16 12:18:23.928115 systemd-networkd[1564]: cali01a2e1d9fb5: Gained carrier Dec 16 12:18:23.975950 containerd[1971]: 2025-12-16 12:18:23.677 [INFO][4893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0 goldmane-666569f655- calico-system 9e39aa72-dd6b-4253-877f-1d57a9236239 829 0 2025-12-16 12:17:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-6 goldmane-666569f655-f4x44 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali01a2e1d9fb5 [] [] }} ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-" Dec 16 12:18:23.975950 containerd[1971]: 2025-12-16 12:18:23.678 [INFO][4893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:23.975950 containerd[1971]: 2025-12-16 12:18:23.791 [INFO][4914] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" HandleID="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Workload="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.791 [INFO][4914] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" 
HandleID="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Workload="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000347ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-6", "pod":"goldmane-666569f655-f4x44", "timestamp":"2025-12-16 12:18:23.791374374 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.791 [INFO][4914] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.792 [INFO][4914] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.792 [INFO][4914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.816 [INFO][4914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" host="ip-172-31-20-6" Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.838 [INFO][4914] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.851 [INFO][4914] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.858 [INFO][4914] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.864 [INFO][4914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:23.976633 containerd[1971]: 2025-12-16 12:18:23.864 [INFO][4914] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" host="ip-172-31-20-6" Dec 16 12:18:23.977186 containerd[1971]: 2025-12-16 12:18:23.870 [INFO][4914] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d Dec 16 12:18:23.977186 containerd[1971]: 2025-12-16 12:18:23.881 [INFO][4914] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" host="ip-172-31-20-6" Dec 16 12:18:23.977186 containerd[1971]: 2025-12-16 12:18:23.896 [INFO][4914] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.2/26] block=192.168.13.0/26 handle="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" host="ip-172-31-20-6" Dec 16 12:18:23.977186 containerd[1971]: 2025-12-16 12:18:23.896 [INFO][4914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.2/26] handle="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" host="ip-172-31-20-6" Dec 16 12:18:23.977186 containerd[1971]: 2025-12-16 12:18:23.896 [INFO][4914] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:18:23.977186 containerd[1971]: 2025-12-16 12:18:23.896 [INFO][4914] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.2/26] IPv6=[] ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" HandleID="k8s-pod-network.9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Workload="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:23.977466 containerd[1971]: 2025-12-16 12:18:23.909 [INFO][4893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9e39aa72-dd6b-4253-877f-1d57a9236239", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"goldmane-666569f655-f4x44", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01a2e1d9fb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:23.977466 containerd[1971]: 2025-12-16 12:18:23.909 [INFO][4893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.2/32] ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:23.977681 containerd[1971]: 2025-12-16 12:18:23.910 [INFO][4893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01a2e1d9fb5 ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:23.977681 containerd[1971]: 2025-12-16 12:18:23.931 [INFO][4893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:23.977790 containerd[1971]: 2025-12-16 12:18:23.931 [INFO][4893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" 
WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9e39aa72-dd6b-4253-877f-1d57a9236239", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d", Pod:"goldmane-666569f655-f4x44", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01a2e1d9fb5", MAC:"1a:99:83:6f:af:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:23.977908 containerd[1971]: 2025-12-16 12:18:23.968 [INFO][4893] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" Namespace="calico-system" Pod="goldmane-666569f655-f4x44" WorkloadEndpoint="ip--172--31--20--6-k8s-goldmane--666569f655--f4x44-eth0" Dec 16 12:18:24.059172 systemd-networkd[1564]: calid737a80e72b: Link UP Dec 16 12:18:24.063710 systemd-networkd[1564]: calid737a80e72b: Gained carrier Dec 16 12:18:24.093193 containerd[1971]: time="2025-12-16T12:18:24.093112947Z" level=info msg="connecting to shim 9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d" address="unix:///run/containerd/s/571a58e574396472fc50a4aa3e29466438cd1513a10d8ed6d03594644fb51c1a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:24.142328 containerd[1971]: 2025-12-16 12:18:23.741 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0 calico-apiserver-fcb9bdb55- calico-apiserver a0556f5e-184b-4527-b60e-270da372abfb 822 0 2025-12-16 12:17:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fcb9bdb55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-6 calico-apiserver-fcb9bdb55-c27r4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid737a80e72b [] [] }} ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-" Dec 16 12:18:24.142328 containerd[1971]: 2025-12-16 12:18:23.742 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.142328 containerd[1971]: 2025-12-16 12:18:23.850 [INFO][4923] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" HandleID="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Workload="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.851 [INFO][4923] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" HandleID="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Workload="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-6", "pod":"calico-apiserver-fcb9bdb55-c27r4", "timestamp":"2025-12-16 12:18:23.850621266 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.853 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.896 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.896 [INFO][4923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.930 [INFO][4923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" host="ip-172-31-20-6" Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.948 [INFO][4923] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.966 [INFO][4923] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.973 [INFO][4923] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:24.143009 containerd[1971]: 2025-12-16 12:18:23.978 [INFO][4923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:23.978 [INFO][4923] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" host="ip-172-31-20-6" Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:23.983 [INFO][4923] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:23.998 [INFO][4923] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" 
host="ip-172-31-20-6" Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:24.018 [INFO][4923] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.3/26] block=192.168.13.0/26 handle="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" host="ip-172-31-20-6" Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:24.020 [INFO][4923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.3/26] handle="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" host="ip-172-31-20-6" Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:24.020 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:18:24.145286 containerd[1971]: 2025-12-16 12:18:24.020 [INFO][4923] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.3/26] IPv6=[] ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" HandleID="k8s-pod-network.00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Workload="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.145614 containerd[1971]: 2025-12-16 12:18:24.045 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0", GenerateName:"calico-apiserver-fcb9bdb55-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0556f5e-184b-4527-b60e-270da372abfb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb9bdb55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"calico-apiserver-fcb9bdb55-c27r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid737a80e72b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:24.145749 containerd[1971]: 2025-12-16 12:18:24.045 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.3/32] ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.145749 containerd[1971]: 2025-12-16 12:18:24.045 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid737a80e72b 
ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.145749 containerd[1971]: 2025-12-16 12:18:24.073 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.145898 containerd[1971]: 2025-12-16 12:18:24.080 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0", GenerateName:"calico-apiserver-fcb9bdb55-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0556f5e-184b-4527-b60e-270da372abfb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb9bdb55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de", Pod:"calico-apiserver-fcb9bdb55-c27r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid737a80e72b", MAC:"e6:8b:8f:d9:2e:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:24.147003 containerd[1971]: 2025-12-16 12:18:24.122 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-c27r4" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--c27r4-eth0" Dec 16 12:18:24.163446 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 16 12:18:24.163584 kernel: audit: type=1325 audit(1765887504.156:668): table=filter:129 family=2 entries=44 op=nft_register_chain pid=4974 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:24.156000 audit[4974]: NETFILTER_CFG table=filter:129 family=2 entries=44 op=nft_register_chain pid=4974 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:24.175030 kernel: audit: type=1300 audit(1765887504.156:668): arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffc691d180 a2=0 
a3=ffff9d174fa8 items=0 ppid=4637 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.156000 audit[4974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffc691d180 a2=0 a3=ffff9d174fa8 items=0 ppid=4637 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.156000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:24.180937 kernel: audit: type=1327 audit(1765887504.156:668): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:24.210000 systemd[1]: Started cri-containerd-9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d.scope - libcontainer container 9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d. Dec 16 12:18:24.247000 audit[4986]: NETFILTER_CFG table=filter:130 family=2 entries=54 op=nft_register_chain pid=4986 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:24.258106 kernel: audit: type=1325 audit(1765887504.247:669): table=filter:130 family=2 entries=54 op=nft_register_chain pid=4986 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:24.247000 audit[4986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffe3d6d260 a2=0 a3=ffff93ca1fa8 items=0 ppid=4637 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.247000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:24.274672 kernel: audit: type=1300 audit(1765887504.247:669): arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffe3d6d260 a2=0 a3=ffff93ca1fa8 items=0 ppid=4637 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.274795 kernel: audit: type=1327 audit(1765887504.247:669): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:24.289575 containerd[1971]: time="2025-12-16T12:18:24.289317148Z" level=info msg="connecting to shim 00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de" address="unix:///run/containerd/s/369ad756787b6bafa129c0b947843ac2f81e37ac494c8346ac9a77e18bfa589b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:24.330844 systemd-networkd[1564]: cali0829b09dc9b: Link UP Dec 16 12:18:24.340164 systemd-networkd[1564]: cali0829b09dc9b: Gained carrier Dec 16 12:18:24.384906 containerd[1971]: 2025-12-16 12:18:23.748 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0 
calico-apiserver-fcb9bdb55- calico-apiserver 41fe1dec-6478-42fa-9c60-8b697b125498 828 0 2025-12-16 12:17:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fcb9bdb55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-6 calico-apiserver-fcb9bdb55-6k77x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0829b09dc9b [] [] }} ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-" Dec 16 12:18:24.384906 containerd[1971]: 2025-12-16 12:18:23.748 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.384906 containerd[1971]: 2025-12-16 12:18:23.873 [INFO][4928] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" HandleID="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Workload="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:23.873 [INFO][4928] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" HandleID="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Workload="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8d10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-6", "pod":"calico-apiserver-fcb9bdb55-6k77x", "timestamp":"2025-12-16 12:18:23.873426642 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:23.874 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.021 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.022 [INFO][4928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.090 [INFO][4928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" host="ip-172-31-20-6" Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.135 [INFO][4928] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.180 [INFO][4928] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.207 [INFO][4928] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:24.385253 containerd[1971]: 2025-12-16 12:18:24.232 [INFO][4928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.234 [INFO][4928] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" host="ip-172-31-20-6" Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.246 [INFO][4928] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55 Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.270 [INFO][4928] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" host="ip-172-31-20-6" Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.293 [INFO][4928] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.4/26] block=192.168.13.0/26 handle="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" host="ip-172-31-20-6" Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.294 [INFO][4928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.4/26] handle="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" host="ip-172-31-20-6" Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.294 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
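Editor's note: the ipam entries above show Calico's IPAM plugin taking the host-wide lock, confirming this node's affinity for the block 192.168.13.0/26, and claiming the next free address for each pod in turn (192.168.13.2, .3 and now .4 on ip-172-31-20-6), persisting each claim via the "Writing block in order to claim IPs" step. The sketch below only illustrates the arithmetic of that /26 block using Go's net/netip; it is not Calico's allocator, which records assignments in the per-block objects written to the datastore.

```go
// ipamblock.go - a minimal sketch of the address arithmetic behind the
// 192.168.13.0/26 block seen in the log above. Not Calico's allocator;
// it only shows how many addresses the block holds and the order in
// which the log shows them being handed out.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.13.0/26")

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
	hostBits := 32 - block.Bits()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<hostBits)

	// Walk the block in order; the log shows .2, .3, .4 (and later .5)
	// being claimed for successive pods on this node.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		fmt.Println(a)
	}
}
```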
Dec 16 12:18:24.386791 containerd[1971]: 2025-12-16 12:18:24.294 [INFO][4928] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.4/26] IPv6=[] ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" HandleID="k8s-pod-network.c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Workload="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.388227 containerd[1971]: 2025-12-16 12:18:24.312 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0", GenerateName:"calico-apiserver-fcb9bdb55-", Namespace:"calico-apiserver", SelfLink:"", UID:"41fe1dec-6478-42fa-9c60-8b697b125498", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb9bdb55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"calico-apiserver-fcb9bdb55-6k77x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0829b09dc9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:24.388383 containerd[1971]: 2025-12-16 12:18:24.314 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.4/32] ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.388383 containerd[1971]: 2025-12-16 12:18:24.314 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0829b09dc9b ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.388383 containerd[1971]: 2025-12-16 12:18:24.337 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.388803 containerd[1971]: 2025-12-16 12:18:24.340 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0", GenerateName:"calico-apiserver-fcb9bdb55-", Namespace:"calico-apiserver", SelfLink:"", UID:"41fe1dec-6478-42fa-9c60-8b697b125498", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb9bdb55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55", Pod:"calico-apiserver-fcb9bdb55-6k77x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0829b09dc9b", MAC:"82:e7:9e:6f:3a:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:24.388953 containerd[1971]: 2025-12-16 12:18:24.372 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" Namespace="calico-apiserver" Pod="calico-apiserver-fcb9bdb55-6k77x" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--apiserver--fcb9bdb55--6k77x-eth0" Dec 16 12:18:24.406579 systemd[1]: Started cri-containerd-00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de.scope - libcontainer container 00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de. 
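Editor's note: for each pod the CNI plugin dumps the v3.WorkloadEndpoint it writes to the datastore twice, first when the endpoint is populated and again once the MAC address and active container ID are known. The snippet below is a trimmed local mirror of that second dump for calico-apiserver-fcb9bdb55-6k77x, using plain Go structs rather than the real projectcalico.org v3 API types, purely to make the shape of the object easier to read; every value is copied from the log entry above.

```go
// wep.go - a local, trimmed-down mirror of the WorkloadEndpoint fields
// visible in the "Added Mac, interface name, and active container ID"
// entry above. These are NOT the real projectcalico.org/v3 Go types;
// they only illustrate what gets written to the datastore.
package main

import (
	"encoding/json"
	"fmt"
)

type workloadEndpointSpec struct {
	Orchestrator  string   `json:"orchestrator"`
	Node          string   `json:"node"`
	ContainerID   string   `json:"containerID"`
	Pod           string   `json:"pod"`
	Endpoint      string   `json:"endpoint"`
	IPNetworks    []string `json:"ipNetworks"`
	Profiles      []string `json:"profiles"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac"`
}

func main() {
	spec := workloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "ip-172-31-20-6",
		ContainerID:   "c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55",
		Pod:           "calico-apiserver-fcb9bdb55-6k77x",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.13.4/32"},
		Profiles:      []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"},
		InterfaceName: "cali0829b09dc9b",
		MAC:           "82:e7:9e:6f:3a:70",
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```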
Dec 16 12:18:24.463601 containerd[1971]: time="2025-12-16T12:18:24.463279421Z" level=info msg="connecting to shim c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55" address="unix:///run/containerd/s/b9b44db2269716563dce19e2f2dd52d625a37c89cbaf120b90799a7b2ebe73e9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:24.508000 audit: BPF prog-id=218 op=LOAD Dec 16 12:18:24.513203 kernel: audit: type=1334 audit(1765887504.508:670): prog-id=218 op=LOAD Dec 16 12:18:24.516000 audit: BPF prog-id=219 op=LOAD Dec 16 12:18:24.526858 kernel: audit: type=1334 audit(1765887504.516:671): prog-id=219 op=LOAD Dec 16 12:18:24.527131 kernel: audit: type=1300 audit(1765887504.516:671): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.516000 audit[4968]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.539305 kernel: audit: type=1327 audit(1765887504.516:671): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.518000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:18:24.518000 audit[4968]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.518000 audit: BPF prog-id=220 op=LOAD Dec 16 12:18:24.518000 audit[4968]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.519000 audit: BPF prog-id=221 op=LOAD Dec 16 12:18:24.519000 audit[4968]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.526000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:18:24.526000 audit[4968]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.526000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:18:24.526000 audit[4968]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.527000 audit: BPF prog-id=222 op=LOAD Dec 16 12:18:24.527000 audit[4968]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4957 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962653565326637613338666230303137356166623037396262323334 Dec 16 12:18:24.547566 containerd[1971]: time="2025-12-16T12:18:24.547493105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2s2m,Uid:a9a3cc19-451e-4d00-baff-e4e463318465,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:24.640497 systemd[1]: Started cri-containerd-c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55.scope - libcontainer container c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55. 
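Editor's note: the audit records interleaved here carry the triggering command line as a hex-encoded, NUL-separated PROCTITLE field. The strings beginning 6970... decode to iptables-nft-restore with its --noflush/--verbose/--wait flags, and the ones beginning 7275... decode to the runc invocation (root /run/containerd/runc/k8s.io plus a truncated log path) for the container just started; the surrounding BPF prog-id LOAD/UNLOAD events are runc attaching and detaching programs while it sets the container up, typically the cgroup v2 device-controller filters. The helper below is only a reading aid for those fields and is not part of any tool appearing in this log.

```go
// proctitle.go - decode the hex PROCTITLE field from the audit records
// above into the original NUL-separated command line. The sample value
// is the iptables-nft-restore proctitle from the log; paste any other
// PROCTITLE string in its place.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Audit encodes argv with NUL separators; swap them for spaces.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```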
Dec 16 12:18:24.651000 audit: BPF prog-id=223 op=LOAD Dec 16 12:18:24.653000 audit: BPF prog-id=224 op=LOAD Dec 16 12:18:24.653000 audit[5015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.653000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:18:24.653000 audit[5015]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.655000 audit: BPF prog-id=225 op=LOAD Dec 16 12:18:24.655000 audit[5015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.656000 audit: BPF prog-id=226 op=LOAD Dec 16 12:18:24.656000 audit[5015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.656000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:18:24.656000 audit[5015]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.657000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:18:24.657000 audit[5015]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.657000 audit: BPF prog-id=227 op=LOAD Dec 16 12:18:24.657000 audit[5015]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5002 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653066656239626439313738376330363030643763386130363734 Dec 16 12:18:24.772000 audit[5090]: NETFILTER_CFG table=filter:131 family=2 entries=51 op=nft_register_chain pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:24.772000 audit[5090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27116 a0=3 a1=ffffe5dc45a0 a2=0 a3=ffffa7f1cfa8 items=0 ppid=4637 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.772000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:24.849000 audit: BPF prog-id=228 op=LOAD Dec 16 12:18:24.850000 audit: BPF prog-id=229 op=LOAD Dec 16 12:18:24.850000 audit[5061]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.851000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:18:24.851000 audit[5061]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.852000 audit: BPF prog-id=230 op=LOAD Dec 16 12:18:24.852000 audit[5061]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.853000 audit: BPF prog-id=231 op=LOAD Dec 16 12:18:24.853000 audit[5061]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.854000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:18:24.854000 audit[5061]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.854000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:18:24.854000 audit[5061]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.855000 audit: BPF prog-id=232 op=LOAD Dec 16 12:18:24.855000 audit[5061]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5049 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331313565623764653831306139396561353261313133333638656666 Dec 16 12:18:24.933284 containerd[1971]: time="2025-12-16T12:18:24.933005575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f4x44,Uid:9e39aa72-dd6b-4253-877f-1d57a9236239,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"9be5e2f7a38fb00175afb079bb2340368c01b13e982b1dc47c7185aac452e50d\"" Dec 16 12:18:24.940382 containerd[1971]: time="2025-12-16T12:18:24.940131475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:18:24.956258 systemd-networkd[1564]: cali4c864395248: Link UP Dec 16 12:18:24.958044 systemd-networkd[1564]: cali4c864395248: Gained carrier Dec 16 12:18:25.025599 containerd[1971]: 2025-12-16 12:18:24.725 [INFO][5075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0 coredns-668d6bf9bc- kube-system a9a3cc19-451e-4d00-baff-e4e463318465 818 0 2025-12-16 12:17:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-6 coredns-668d6bf9bc-d2s2m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c864395248 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-" Dec 16 12:18:25.025599 containerd[1971]: 2025-12-16 12:18:24.726 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.025599 containerd[1971]: 2025-12-16 12:18:24.811 [INFO][5098] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" HandleID="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Workload="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.813 [INFO][5098] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" HandleID="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Workload="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-6", "pod":"coredns-668d6bf9bc-d2s2m", "timestamp":"2025-12-16 12:18:24.811746883 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.813 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.814 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.814 [INFO][5098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.840 [INFO][5098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" host="ip-172-31-20-6" Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.856 [INFO][5098] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.871 [INFO][5098] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.878 [INFO][5098] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.884 [INFO][5098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:25.025950 containerd[1971]: 2025-12-16 12:18:24.884 [INFO][5098] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" host="ip-172-31-20-6" Dec 16 12:18:25.028152 containerd[1971]: 2025-12-16 12:18:24.897 [INFO][5098] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af Dec 16 12:18:25.028152 containerd[1971]: 2025-12-16 12:18:24.915 [INFO][5098] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" host="ip-172-31-20-6" Dec 16 12:18:25.028152 containerd[1971]: 2025-12-16 12:18:24.933 [INFO][5098] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.5/26] block=192.168.13.0/26 handle="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" host="ip-172-31-20-6" Dec 16 12:18:25.028152 containerd[1971]: 2025-12-16 12:18:24.933 [INFO][5098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.5/26] handle="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" host="ip-172-31-20-6" Dec 16 12:18:25.028152 containerd[1971]: 2025-12-16 12:18:24.934 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:18:25.028152 containerd[1971]: 2025-12-16 12:18:24.934 [INFO][5098] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.5/26] IPv6=[] ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" HandleID="k8s-pod-network.37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Workload="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.029289 containerd[1971]: 2025-12-16 12:18:24.944 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a9a3cc19-451e-4d00-baff-e4e463318465", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"coredns-668d6bf9bc-d2s2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c864395248", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:25.029289 containerd[1971]: 2025-12-16 12:18:24.944 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.5/32] ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.029289 containerd[1971]: 2025-12-16 12:18:24.945 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c864395248 ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.029289 containerd[1971]: 2025-12-16 12:18:24.970 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" 
WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.029289 containerd[1971]: 2025-12-16 12:18:24.980 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a9a3cc19-451e-4d00-baff-e4e463318465", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af", Pod:"coredns-668d6bf9bc-d2s2m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c864395248", MAC:"0a:3d:21:08:a9:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:25.029289 containerd[1971]: 2025-12-16 12:18:25.019 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2s2m" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--d2s2m-eth0" Dec 16 12:18:25.070697 containerd[1971]: time="2025-12-16T12:18:25.070452184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-c27r4,Uid:a0556f5e-184b-4527-b60e-270da372abfb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"00e0feb9bd91787c0600d7c8a06743e63fd415e2669b0000991f8d3ce6c8c6de\"" Dec 16 12:18:25.125710 systemd-networkd[1564]: calid737a80e72b: Gained IPv6LL Dec 16 12:18:25.128585 containerd[1971]: time="2025-12-16T12:18:25.128532772Z" level=info msg="connecting to shim 37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af" address="unix:///run/containerd/s/c516421360b3ddd8017671929e777d93fb9e05a03ce3163406e19e812b61712d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:25.193014 systemd-networkd[1564]: cali01a2e1d9fb5: Gained IPv6LL Dec 16 12:18:25.241000 audit[5147]: NETFILTER_CFG table=filter:132 family=2 entries=50 
op=nft_register_chain pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:25.241000 audit[5147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24912 a0=3 a1=ffffdde5d120 a2=0 a3=ffffb7ab0fa8 items=0 ppid=4637 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.241000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:25.245846 containerd[1971]: time="2025-12-16T12:18:25.245537933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb9bdb55-6k77x,Uid:41fe1dec-6478-42fa-9c60-8b697b125498,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c115eb7de810a99ea52a113368effeee4f45875a266402a15ca7df929c4c5b55\"" Dec 16 12:18:25.282195 containerd[1971]: time="2025-12-16T12:18:25.281762861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:25.294352 containerd[1971]: time="2025-12-16T12:18:25.294286601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:18:25.295427 containerd[1971]: time="2025-12-16T12:18:25.294549653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:25.296641 kubelet[3411]: E1216 12:18:25.296270 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:18:25.296641 kubelet[3411]: E1216 12:18:25.296344 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:18:25.300432 kubelet[3411]: E1216 12:18:25.297338 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h456d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-f4x44_calico-system(9e39aa72-dd6b-4253-877f-1d57a9236239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:25.300432 kubelet[3411]: E1216 12:18:25.299102 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:18:25.299096 systemd[1]: Started cri-containerd-37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af.scope - libcontainer container 
37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af. Dec 16 12:18:25.305556 containerd[1971]: time="2025-12-16T12:18:25.305160857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:18:25.357000 audit: BPF prog-id=233 op=LOAD Dec 16 12:18:25.360000 audit: BPF prog-id=234 op=LOAD Dec 16 12:18:25.360000 audit[5146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000c8180 a2=98 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.360000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:18:25.360000 audit[5146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.361000 audit: BPF prog-id=235 op=LOAD Dec 16 12:18:25.361000 audit[5146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000c83e8 a2=98 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.361000 audit: BPF prog-id=236 op=LOAD Dec 16 12:18:25.361000 audit[5146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000c8168 a2=98 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.361000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:18:25.361000 audit[5146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.361000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.361000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:18:25.361000 audit[5146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.361000 audit: BPF prog-id=237 op=LOAD Dec 16 12:18:25.361000 audit[5146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000c8648 a2=98 a3=0 items=0 ppid=5134 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663236613265383537346465346537376536336637373865336132 Dec 16 12:18:25.416933 containerd[1971]: time="2025-12-16T12:18:25.416824554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2s2m,Uid:a9a3cc19-451e-4d00-baff-e4e463318465,Namespace:kube-system,Attempt:0,} returns sandbox id \"37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af\"" Dec 16 12:18:25.425117 containerd[1971]: time="2025-12-16T12:18:25.424765590Z" level=info msg="CreateContainer within sandbox \"37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:18:25.483428 containerd[1971]: time="2025-12-16T12:18:25.483372174Z" level=info msg="Container acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:25.504510 containerd[1971]: time="2025-12-16T12:18:25.504438990Z" level=info msg="CreateContainer within sandbox \"37f26a2e8574de4e77e63f778e3a2bc72694b3c7e0064b1c3b1b4660818089af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815\"" Dec 16 12:18:25.505999 containerd[1971]: time="2025-12-16T12:18:25.505860654Z" level=info msg="StartContainer for \"acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815\"" Dec 16 12:18:25.509082 containerd[1971]: time="2025-12-16T12:18:25.508934610Z" level=info msg="connecting to shim acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815" address="unix:///run/containerd/s/c516421360b3ddd8017671929e777d93fb9e05a03ce3163406e19e812b61712d" protocol=ttrpc version=3 Dec 16 12:18:25.546951 containerd[1971]: time="2025-12-16T12:18:25.546812586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wp4r,Uid:7f31f51a-3fe7-4796-97ca-d9a3c9b5116f,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:25.548394 containerd[1971]: 
time="2025-12-16T12:18:25.548274150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5959d55c94-c8546,Uid:9a9f64cf-c939-425f-bc9b-14da143ab498,Namespace:calico-system,Attempt:0,}" Dec 16 12:18:25.548766 systemd[1]: Started cri-containerd-acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815.scope - libcontainer container acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815. Dec 16 12:18:25.566356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054807857.mount: Deactivated successfully. Dec 16 12:18:25.581467 containerd[1971]: time="2025-12-16T12:18:25.581338447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:25.592857 containerd[1971]: time="2025-12-16T12:18:25.592712467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:18:25.594141 containerd[1971]: time="2025-12-16T12:18:25.592841947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:25.594141 containerd[1971]: time="2025-12-16T12:18:25.593627491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:18:25.594318 kubelet[3411]: E1216 12:18:25.593111 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:25.594318 kubelet[3411]: E1216 12:18:25.593179 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:25.594670 kubelet[3411]: E1216 12:18:25.594529 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88br8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-c27r4_calico-apiserver(a0556f5e-184b-4527-b60e-270da372abfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:25.596818 kubelet[3411]: E1216 12:18:25.596725 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:18:25.641000 audit: BPF prog-id=238 op=LOAD Dec 16 12:18:25.643000 audit: BPF prog-id=239 op=LOAD Dec 16 12:18:25.643000 audit[5177]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.645000 audit: BPF prog-id=239 op=UNLOAD Dec 16 12:18:25.645000 audit[5177]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.646000 audit: BPF prog-id=240 op=LOAD Dec 16 12:18:25.646000 audit[5177]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.646000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.647000 audit: BPF prog-id=241 op=LOAD Dec 16 12:18:25.647000 audit[5177]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.647000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:18:25.647000 audit[5177]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.647000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:18:25.647000 audit[5177]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.648000 audit: BPF prog-id=242 op=LOAD Dec 16 12:18:25.648000 audit[5177]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5134 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:25.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163633339376332336166623039393033396535303762323665313666 Dec 16 12:18:25.715270 containerd[1971]: time="2025-12-16T12:18:25.714564343Z" level=info msg="StartContainer for \"acc397c23afb099039e507b26e16f4c24bb4fa6741bc1b34f133d4f1ad203815\" returns successfully" Dec 16 12:18:25.875030 containerd[1971]: time="2025-12-16T12:18:25.874957772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:25.877500 containerd[1971]: time="2025-12-16T12:18:25.877305440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:18:25.877500 containerd[1971]: time="2025-12-16T12:18:25.877378880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:25.878255 kubelet[3411]: E1216 12:18:25.877633 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:25.878255 kubelet[3411]: E1216 12:18:25.877722 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:25.878255 kubelet[3411]: E1216 12:18:25.877902 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ntjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-6k77x_calico-apiserver(41fe1dec-6478-42fa-9c60-8b697b125498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:25.880181 
kubelet[3411]: E1216 12:18:25.880107 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:18:25.910578 systemd-networkd[1564]: califcb24684bad: Link UP Dec 16 12:18:25.914341 systemd-networkd[1564]: califcb24684bad: Gained carrier Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.721 [INFO][5198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0 csi-node-driver- calico-system 7f31f51a-3fe7-4796-97ca-d9a3c9b5116f 724 0 2025-12-16 12:18:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-6 csi-node-driver-7wp4r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califcb24684bad [] [] }} ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.721 [INFO][5198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.790 [INFO][5231] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" HandleID="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Workload="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.790 [INFO][5231] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" HandleID="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Workload="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd740), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-6", "pod":"csi-node-driver-7wp4r", "timestamp":"2025-12-16 12:18:25.790170032 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.791 [INFO][5231] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.791 [INFO][5231] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.791 [INFO][5231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.811 [INFO][5231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.826 [INFO][5231] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.841 [INFO][5231] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.848 [INFO][5231] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.853 [INFO][5231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.855 [INFO][5231] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.860 [INFO][5231] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.872 [INFO][5231] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.895 [INFO][5231] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.6/26] block=192.168.13.0/26 handle="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.895 [INFO][5231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.6/26] handle="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" host="ip-172-31-20-6" Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.896 [INFO][5231] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:18:25.959346 containerd[1971]: 2025-12-16 12:18:25.896 [INFO][5231] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.6/26] IPv6=[] ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" HandleID="k8s-pod-network.7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Workload="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:25.963458 containerd[1971]: 2025-12-16 12:18:25.903 [INFO][5198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"csi-node-driver-7wp4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califcb24684bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:25.963458 containerd[1971]: 2025-12-16 12:18:25.903 [INFO][5198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.6/32] ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:25.963458 containerd[1971]: 2025-12-16 12:18:25.903 [INFO][5198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcb24684bad ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:25.963458 containerd[1971]: 2025-12-16 12:18:25.912 [INFO][5198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:25.963458 containerd[1971]: 2025-12-16 12:18:25.915 [INFO][5198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" 
Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f31f51a-3fe7-4796-97ca-d9a3c9b5116f", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f", Pod:"csi-node-driver-7wp4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califcb24684bad", MAC:"3e:26:de:e1:ea:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:25.963458 containerd[1971]: 2025-12-16 12:18:25.951 [INFO][5198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" Namespace="calico-system" Pod="csi-node-driver-7wp4r" WorkloadEndpoint="ip--172--31--20--6-k8s-csi--node--driver--7wp4r-eth0" Dec 16 12:18:26.022116 systemd-networkd[1564]: cali4c864395248: Gained IPv6LL Dec 16 12:18:26.044424 containerd[1971]: time="2025-12-16T12:18:26.044366873Z" level=info msg="connecting to shim 7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f" address="unix:///run/containerd/s/9f7f0176a13d7a60f192a01eef8a6ade65021070f82002aa3ebd6ecd4330e66d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:26.055968 kubelet[3411]: E1216 12:18:26.055038 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:18:26.064414 kubelet[3411]: E1216 12:18:26.064320 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" 
podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:18:26.069657 kubelet[3411]: E1216 12:18:26.069312 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:18:26.087172 systemd-networkd[1564]: cali0829b09dc9b: Gained IPv6LL Dec 16 12:18:26.157256 systemd-networkd[1564]: calib5bdc49a1b9: Link UP Dec 16 12:18:26.158874 systemd-networkd[1564]: calib5bdc49a1b9: Gained carrier Dec 16 12:18:26.167619 systemd[1]: Started cri-containerd-7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f.scope - libcontainer container 7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f. Dec 16 12:18:26.187807 kubelet[3411]: I1216 12:18:26.187692 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d2s2m" podStartSLOduration=51.18766605 podStartE2EDuration="51.18766605s" podCreationTimestamp="2025-12-16 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:26.126712973 +0000 UTC m=+55.301683427" watchObservedRunningTime="2025-12-16 12:18:26.18766605 +0000 UTC m=+55.362636480" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.741 [INFO][5205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0 calico-kube-controllers-5959d55c94- calico-system 9a9f64cf-c939-425f-bc9b-14da143ab498 827 0 2025-12-16 12:18:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5959d55c94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-6 calico-kube-controllers-5959d55c94-c8546 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib5bdc49a1b9 [] [] }} ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.742 [INFO][5205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.847 [INFO][5237] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" HandleID="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Workload="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.850 
[INFO][5237] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" HandleID="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Workload="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9100), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-6", "pod":"calico-kube-controllers-5959d55c94-c8546", "timestamp":"2025-12-16 12:18:25.847260896 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.850 [INFO][5237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.896 [INFO][5237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.896 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.939 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.966 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.989 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:25.993 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.006 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.007 [INFO][5237] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.020 [INFO][5237] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8 Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.034 [INFO][5237] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.084 [INFO][5237] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.7/26] block=192.168.13.0/26 handle="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.087 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.7/26] handle="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" host="ip-172-31-20-6" Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.101 [INFO][5237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:18:26.256477 containerd[1971]: 2025-12-16 12:18:26.107 [INFO][5237] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.7/26] IPv6=[] ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" HandleID="k8s-pod-network.4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Workload="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.257922 containerd[1971]: 2025-12-16 12:18:26.132 [INFO][5205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0", GenerateName:"calico-kube-controllers-5959d55c94-", Namespace:"calico-system", SelfLink:"", UID:"9a9f64cf-c939-425f-bc9b-14da143ab498", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5959d55c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"calico-kube-controllers-5959d55c94-c8546", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5bdc49a1b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:26.257922 containerd[1971]: 2025-12-16 12:18:26.134 [INFO][5205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.7/32] ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.257922 containerd[1971]: 2025-12-16 12:18:26.139 [INFO][5205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5bdc49a1b9 ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.257922 containerd[1971]: 2025-12-16 12:18:26.178 [INFO][5205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.257922 containerd[1971]: 2025-12-16 
12:18:26.192 [INFO][5205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0", GenerateName:"calico-kube-controllers-5959d55c94-", Namespace:"calico-system", SelfLink:"", UID:"9a9f64cf-c939-425f-bc9b-14da143ab498", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5959d55c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8", Pod:"calico-kube-controllers-5959d55c94-c8546", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5bdc49a1b9", MAC:"aa:e6:5f:1a:a6:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:26.257922 containerd[1971]: 2025-12-16 12:18:26.242 [INFO][5205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" Namespace="calico-system" Pod="calico-kube-controllers-5959d55c94-c8546" WorkloadEndpoint="ip--172--31--20--6-k8s-calico--kube--controllers--5959d55c94--c8546-eth0" Dec 16 12:18:26.330199 containerd[1971]: time="2025-12-16T12:18:26.329302554Z" level=info msg="connecting to shim 4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8" address="unix:///run/containerd/s/d4f9132ddc162a22173447f0199c032234beb37439088a1507de04edb9f9636c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:26.349000 audit[5296]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:26.349000 audit[5296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff469af20 a2=0 a3=1 items=0 ppid=3639 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:26.361000 audit[5296]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:26.361000 
audit[5296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff469af20 a2=0 a3=1 items=0 ppid=3639 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:26.368000 audit: BPF prog-id=243 op=LOAD Dec 16 12:18:26.371000 audit: BPF prog-id=244 op=LOAD Dec 16 12:18:26.371000 audit[5274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.372000 audit: BPF prog-id=244 op=UNLOAD Dec 16 12:18:26.372000 audit[5274]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.373000 audit: BPF prog-id=245 op=LOAD Dec 16 12:18:26.373000 audit[5274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.374000 audit: BPF prog-id=246 op=LOAD Dec 16 12:18:26.374000 audit[5274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.376000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:18:26.376000 audit[5274]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.376000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:18:26.376000 audit[5274]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.377000 audit: BPF prog-id=247 op=LOAD Dec 16 12:18:26.377000 audit[5274]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5262 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762623230306539306164616630316539396531613862633063646530 Dec 16 12:18:26.399946 systemd[1]: Started cri-containerd-4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8.scope - libcontainer container 4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8. 
Dec 16 12:18:26.476000 audit[5302]: NETFILTER_CFG table=filter:135 family=2 entries=48 op=nft_register_chain pid=5302 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:26.476000 audit[5302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffe3477230 a2=0 a3=ffff87a13fa8 items=0 ppid=4637 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.476000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:26.480000 audit: BPF prog-id=248 op=LOAD Dec 16 12:18:26.481000 audit: BPF prog-id=249 op=LOAD Dec 16 12:18:26.481000 audit[5323]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.481000 audit: BPF prog-id=249 op=UNLOAD Dec 16 12:18:26.481000 audit[5323]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.481000 audit: BPF prog-id=250 op=LOAD Dec 16 12:18:26.481000 audit[5323]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.483000 audit: BPF prog-id=251 op=LOAD Dec 16 12:18:26.483000 audit[5323]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.483000 audit: BPF 
prog-id=251 op=UNLOAD Dec 16 12:18:26.483000 audit[5323]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.484000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:18:26.484000 audit[5323]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.485000 audit: BPF prog-id=252 op=LOAD Dec 16 12:18:26.485000 audit[5323]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5312 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463356465306263613135616131356333643739653630383064643937 Dec 16 12:18:26.495484 containerd[1971]: time="2025-12-16T12:18:26.495000019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wp4r,Uid:7f31f51a-3fe7-4796-97ca-d9a3c9b5116f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bb200e90adaf01e99e1a8bc0cde0082359e54bae5a940d89d8a53bb6dc6250f\"" Dec 16 12:18:26.498978 containerd[1971]: time="2025-12-16T12:18:26.498833539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:18:26.536000 audit[5349]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:26.536000 audit[5349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffda14c050 a2=0 a3=1 items=0 ppid=3639 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:26.546843 containerd[1971]: time="2025-12-16T12:18:26.546780415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbgzs,Uid:debc4e05-9014-472c-af09-ca9dd2acb4d3,Namespace:kube-system,Attempt:0,}" Dec 16 12:18:26.547000 audit[5349]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:18:26.547000 audit[5349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffda14c050 a2=0 a3=1 items=0 ppid=3639 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:26.638000 audit[5363]: NETFILTER_CFG table=filter:138 family=2 entries=52 op=nft_register_chain pid=5363 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:26.638000 audit[5363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffdf7c63f0 a2=0 a3=ffffbc865fa8 items=0 ppid=4637 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.638000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:26.655201 containerd[1971]: time="2025-12-16T12:18:26.655135280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5959d55c94-c8546,Uid:9a9f64cf-c939-425f-bc9b-14da143ab498,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c5de0bca15aa15c3d79e6080dd97eedc074936cc346960c7cd274aa6d4782a8\"" Dec 16 12:18:26.807693 containerd[1971]: time="2025-12-16T12:18:26.807641337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:26.811014 containerd[1971]: time="2025-12-16T12:18:26.810247509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:18:26.811014 containerd[1971]: time="2025-12-16T12:18:26.810932721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:26.812232 kubelet[3411]: E1216 12:18:26.811847 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:18:26.815166 kubelet[3411]: E1216 12:18:26.812246 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:18:26.815166 kubelet[3411]: E1216 12:18:26.812553 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:26.817109 containerd[1971]: time="2025-12-16T12:18:26.816405249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:18:26.846254 systemd-networkd[1564]: calif62800b3d04: Link UP Dec 16 12:18:26.848636 systemd-networkd[1564]: calif62800b3d04: Gained carrier Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.696 [INFO][5356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0 coredns-668d6bf9bc- kube-system debc4e05-9014-472c-af09-ca9dd2acb4d3 830 0 2025-12-16 12:17:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-6 coredns-668d6bf9bc-wbgzs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif62800b3d04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.696 [INFO][5356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" 
WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.746 [INFO][5373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" HandleID="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Workload="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.747 [INFO][5373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" HandleID="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Workload="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3920), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-6", "pod":"coredns-668d6bf9bc-wbgzs", "timestamp":"2025-12-16 12:18:26.746795564 +0000 UTC"}, Hostname:"ip-172-31-20-6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.747 [INFO][5373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.747 [INFO][5373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.747 [INFO][5373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-6' Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.764 [INFO][5373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.780 [INFO][5373] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.790 [INFO][5373] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.794 [INFO][5373] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.799 [INFO][5373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.799 [INFO][5373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.803 [INFO][5373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.814 [INFO][5373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.831 [INFO][5373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.8/26] block=192.168.13.0/26 
handle="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.831 [INFO][5373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.8/26] handle="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" host="ip-172-31-20-6" Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.831 [INFO][5373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:18:26.882923 containerd[1971]: 2025-12-16 12:18:26.831 [INFO][5373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.8/26] IPv6=[] ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" HandleID="k8s-pod-network.794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Workload="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.885658 containerd[1971]: 2025-12-16 12:18:26.837 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"debc4e05-9014-472c-af09-ca9dd2acb4d3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"", Pod:"coredns-668d6bf9bc-wbgzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif62800b3d04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:26.885658 containerd[1971]: 2025-12-16 12:18:26.838 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.8/32] ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.885658 containerd[1971]: 2025-12-16 12:18:26.838 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif62800b3d04 
ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.885658 containerd[1971]: 2025-12-16 12:18:26.850 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.885658 containerd[1971]: 2025-12-16 12:18:26.852 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"debc4e05-9014-472c-af09-ca9dd2acb4d3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-6", ContainerID:"794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b", Pod:"coredns-668d6bf9bc-wbgzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif62800b3d04", MAC:"d6:6a:b2:a0:94:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:18:26.885658 containerd[1971]: 2025-12-16 12:18:26.877 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-wbgzs" WorkloadEndpoint="ip--172--31--20--6-k8s-coredns--668d6bf9bc--wbgzs-eth0" Dec 16 12:18:26.931000 audit[5388]: NETFILTER_CFG table=filter:139 family=2 entries=58 op=nft_register_chain pid=5388 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:18:26.931000 audit[5388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26744 a0=3 a1=ffffea63d7a0 a2=0 a3=ffffa25f4fa8 items=0 ppid=4637 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:26.931000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:18:26.940540 containerd[1971]: time="2025-12-16T12:18:26.940460085Z" level=info msg="connecting to shim 794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b" address="unix:///run/containerd/s/dd9910236c661f891166421e1c2effb89e04eaffebfe575544294fe74e63bd69" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:18:27.006453 systemd[1]: Started cri-containerd-794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b.scope - libcontainer container 794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b. Dec 16 12:18:27.036000 audit: BPF prog-id=253 op=LOAD Dec 16 12:18:27.038000 audit: BPF prog-id=254 op=LOAD Dec 16 12:18:27.038000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.038000 audit: BPF prog-id=254 op=UNLOAD Dec 16 12:18:27.038000 audit[5409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.038000 audit: BPF prog-id=255 op=LOAD Dec 16 12:18:27.038000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.039000 audit: BPF prog-id=256 op=LOAD Dec 16 12:18:27.039000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.039000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.039000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:18:27.039000 audit[5409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.039000 audit: BPF prog-id=255 op=UNLOAD Dec 16 12:18:27.039000 audit[5409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.039000 audit: BPF prog-id=257 op=LOAD Dec 16 12:18:27.039000 audit[5409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5397 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739346162626632306461366364666137313738343161616132353736 Dec 16 12:18:27.091526 kubelet[3411]: E1216 12:18:27.089740 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:18:27.096402 kubelet[3411]: E1216 12:18:27.091851 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:18:27.102177 containerd[1971]: time="2025-12-16T12:18:27.101868342Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:27.106182 containerd[1971]: time="2025-12-16T12:18:27.104526078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:18:27.106182 containerd[1971]: time="2025-12-16T12:18:27.104669310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:27.110175 systemd-networkd[1564]: califcb24684bad: Gained IPv6LL Dec 16 12:18:27.117582 kubelet[3411]: E1216 12:18:27.117448 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:18:27.117582 kubelet[3411]: E1216 12:18:27.117513 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:18:27.118616 kubelet[3411]: E1216 12:18:27.117910 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z56p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5959d55c94-c8546_calico-system(9a9f64cf-c939-425f-bc9b-14da143ab498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:27.120340 kubelet[3411]: E1216 12:18:27.120164 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:18:27.122881 containerd[1971]: time="2025-12-16T12:18:27.119356734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:18:27.163797 containerd[1971]: time="2025-12-16T12:18:27.163731174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wbgzs,Uid:debc4e05-9014-472c-af09-ca9dd2acb4d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b\"" Dec 16 12:18:27.171979 containerd[1971]: time="2025-12-16T12:18:27.171870066Z" level=info msg="CreateContainer within sandbox \"794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:18:27.201950 containerd[1971]: time="2025-12-16T12:18:27.201304615Z" level=info msg="Container 130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:18:27.220793 containerd[1971]: time="2025-12-16T12:18:27.220704451Z" level=info msg="CreateContainer within sandbox \"794abbf20da6cdfa717841aaa257655700455884ab843d43fe8c832de96c7e0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587\"" Dec 16 12:18:27.222652 containerd[1971]: time="2025-12-16T12:18:27.222606355Z" level=info msg="StartContainer for \"130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587\"" Dec 16 12:18:27.228633 containerd[1971]: time="2025-12-16T12:18:27.228568783Z" level=info msg="connecting to shim 130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587" address="unix:///run/containerd/s/dd9910236c661f891166421e1c2effb89e04eaffebfe575544294fe74e63bd69" protocol=ttrpc version=3 Dec 16 12:18:27.262628 systemd[1]: Started 
cri-containerd-130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587.scope - libcontainer container 130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587. Dec 16 12:18:27.289000 audit: BPF prog-id=258 op=LOAD Dec 16 12:18:27.291000 audit: BPF prog-id=259 op=LOAD Dec 16 12:18:27.291000 audit[5436]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.291000 audit: BPF prog-id=259 op=UNLOAD Dec 16 12:18:27.291000 audit[5436]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.292000 audit: BPF prog-id=260 op=LOAD Dec 16 12:18:27.292000 audit[5436]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.293000 audit: BPF prog-id=261 op=LOAD Dec 16 12:18:27.293000 audit[5436]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.293000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:18:27.293000 audit[5436]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.293000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.294000 audit: BPF prog-id=260 op=UNLOAD Dec 16 12:18:27.294000 audit[5436]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.294000 audit: BPF prog-id=262 op=LOAD Dec 16 12:18:27.294000 audit[5436]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5397 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:27.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303531376362363466376365343936653761336534386436396432 Dec 16 12:18:27.341041 containerd[1971]: time="2025-12-16T12:18:27.340759351Z" level=info msg="StartContainer for \"130517cb64f7ce496e7a3e48d69d297e0a595ad3a69b26800a0a437b5937f587\" returns successfully" Dec 16 12:18:27.386174 containerd[1971]: time="2025-12-16T12:18:27.385898348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:27.388571 containerd[1971]: time="2025-12-16T12:18:27.388451936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:18:27.388909 containerd[1971]: time="2025-12-16T12:18:27.388534904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:27.389270 kubelet[3411]: E1216 12:18:27.389213 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:18:27.389410 kubelet[3411]: E1216 12:18:27.389284 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:18:27.389517 kubelet[3411]: E1216 12:18:27.389446 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:27.391356 kubelet[3411]: E1216 12:18:27.391169 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:27.685360 systemd-networkd[1564]: calib5bdc49a1b9: Gained IPv6LL Dec 16 12:18:28.099172 kubelet[3411]: E1216 12:18:28.099041 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:28.100865 kubelet[3411]: E1216 12:18:28.099534 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:18:28.223307 kubelet[3411]: I1216 12:18:28.223190 3411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wbgzs" podStartSLOduration=53.223162364 podStartE2EDuration="53.223162364s" podCreationTimestamp="2025-12-16 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:28.197691692 +0000 UTC m=+57.372662242" watchObservedRunningTime="2025-12-16 12:18:28.223162364 +0000 UTC m=+57.398132830" Dec 16 12:18:28.239000 audit[5480]: NETFILTER_CFG table=filter:140 family=2 entries=20 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:28.239000 audit[5480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe1dda2e0 a2=0 a3=1 items=0 ppid=3639 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:28.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:28.245000 audit[5480]: NETFILTER_CFG table=nat:141 family=2 entries=14 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:28.245000 audit[5480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe1dda2e0 a2=0 a3=1 items=0 ppid=3639 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:28.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:28.645344 systemd-networkd[1564]: calif62800b3d04: Gained IPv6LL Dec 16 12:18:29.175000 audit[5482]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:29.178401 kernel: kauditd_printk_skb: 227 callbacks suppressed Dec 16 12:18:29.178588 kernel: audit: type=1325 audit(1765887509.175:753): table=filter:142 family=2 entries=17 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:29.175000 
audit[5482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee7846f0 a2=0 a3=1 items=0 ppid=3639 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:29.188777 kernel: audit: type=1300 audit(1765887509.175:753): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee7846f0 a2=0 a3=1 items=0 ppid=3639 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:29.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:29.193441 kernel: audit: type=1327 audit(1765887509.175:753): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:29.195000 audit[5482]: NETFILTER_CFG table=nat:143 family=2 entries=35 op=nft_register_chain pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:29.195000 audit[5482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffee7846f0 a2=0 a3=1 items=0 ppid=3639 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:29.207070 kernel: audit: type=1325 audit(1765887509.195:754): table=nat:143 family=2 entries=35 op=nft_register_chain pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:29.207244 kernel: audit: type=1300 audit(1765887509.195:754): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffee7846f0 a2=0 a3=1 items=0 ppid=3639 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:29.195000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:29.211093 kernel: audit: type=1327 audit(1765887509.195:754): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:31.356784 ntpd[1930]: Listen normally on 6 vxlan.calico 192.168.13.0:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 6 vxlan.calico 192.168.13.0:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 7 cali91c6949d5f9 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 8 vxlan.calico [fe80::6481:1eff:fe99:da8d%5]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 9 cali01a2e1d9fb5 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 10 calid737a80e72b [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 11 cali0829b09dc9b [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 12 cali4c864395248 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 
12:18:31 ntpd[1930]: Listen normally on 13 califcb24684bad [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 14 calib5bdc49a1b9 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:18:31.357944 ntpd[1930]: 16 Dec 12:18:31 ntpd[1930]: Listen normally on 15 calif62800b3d04 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:18:31.356874 ntpd[1930]: Listen normally on 7 cali91c6949d5f9 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:18:31.356922 ntpd[1930]: Listen normally on 8 vxlan.calico [fe80::6481:1eff:fe99:da8d%5]:123 Dec 16 12:18:31.356967 ntpd[1930]: Listen normally on 9 cali01a2e1d9fb5 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:18:31.357012 ntpd[1930]: Listen normally on 10 calid737a80e72b [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:18:31.357098 ntpd[1930]: Listen normally on 11 cali0829b09dc9b [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:18:31.357149 ntpd[1930]: Listen normally on 12 cali4c864395248 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:18:31.357200 ntpd[1930]: Listen normally on 13 califcb24684bad [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:18:31.357245 ntpd[1930]: Listen normally on 14 calib5bdc49a1b9 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:18:31.357290 ntpd[1930]: Listen normally on 15 calif62800b3d04 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:18:35.548994 containerd[1971]: time="2025-12-16T12:18:35.548758336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:18:35.941570 containerd[1971]: time="2025-12-16T12:18:35.940916946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:35.944690 containerd[1971]: time="2025-12-16T12:18:35.944430282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:18:35.944690 containerd[1971]: time="2025-12-16T12:18:35.944520882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:35.945369 kubelet[3411]: E1216 12:18:35.945296 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:18:35.947176 kubelet[3411]: E1216 12:18:35.945370 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:18:35.947554 kubelet[3411]: E1216 12:18:35.947428 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f43187b9eb34459fb9682ed9d785cfc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:35.950854 containerd[1971]: time="2025-12-16T12:18:35.950787342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:18:36.270036 containerd[1971]: time="2025-12-16T12:18:36.269817220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:36.272744 containerd[1971]: time="2025-12-16T12:18:36.272595436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:18:36.272744 containerd[1971]: time="2025-12-16T12:18:36.272657392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:36.273019 kubelet[3411]: E1216 12:18:36.272947 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:18:36.273184 kubelet[3411]: E1216 12:18:36.273093 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:18:36.273821 kubelet[3411]: E1216 12:18:36.273260 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:36.275302 kubelet[3411]: E1216 12:18:36.275189 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:18:36.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.20.6:22-147.75.109.163:55934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:36.368800 systemd[1]: Started sshd@7-172.31.20.6:22-147.75.109.163:55934.service - OpenSSH per-connection server daemon (147.75.109.163:55934). 
Dec 16 12:18:36.377124 kernel: audit: type=1130 audit(1765887516.367:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.20.6:22-147.75.109.163:55934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:36.561000 audit[5501]: USER_ACCT pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.568769 sshd[5501]: Accepted publickey for core from 147.75.109.163 port 55934 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:36.569324 kernel: audit: type=1101 audit(1765887516.561:756): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.569000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.578851 kernel: audit: type=1103 audit(1765887516.569:757): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.578983 kernel: audit: type=1006 audit(1765887516.574:758): pid=5501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 12:18:36.574000 audit[5501]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3fec020 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:36.579709 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:36.574000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:36.587843 kernel: audit: type=1300 audit(1765887516.574:758): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3fec020 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:36.587947 kernel: audit: type=1327 audit(1765887516.574:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:36.597217 systemd-logind[1939]: New session 9 of user core. Dec 16 12:18:36.604356 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 12:18:36.610000 audit[5501]: USER_START pid=5501 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.621203 kernel: audit: type=1105 audit(1765887516.610:759): pid=5501 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.620000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.634094 kernel: audit: type=1103 audit(1765887516.620:760): pid=5505 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.938948 sshd[5505]: Connection closed by 147.75.109.163 port 55934 Dec 16 12:18:36.939413 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:36.942000 audit[5501]: USER_END pid=5501 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.948842 systemd[1]: sshd@7-172.31.20.6:22-147.75.109.163:55934.service: Deactivated successfully. Dec 16 12:18:36.942000 audit[5501]: CRED_DISP pid=5501 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.956695 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:18:36.960705 kernel: audit: type=1106 audit(1765887516.942:761): pid=5501 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.960789 kernel: audit: type=1104 audit(1765887516.942:762): pid=5501 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:36.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.20.6:22-147.75.109.163:55934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:36.965577 systemd-logind[1939]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:18:36.968398 systemd-logind[1939]: Removed session 9. 
Dec 16 12:18:38.156000 audit[5520]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:38.156000 audit[5520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd4f94a0 a2=0 a3=1 items=0 ppid=3639 pid=5520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:38.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:38.174000 audit[5520]: NETFILTER_CFG table=nat:145 family=2 entries=56 op=nft_register_chain pid=5520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:38.174000 audit[5520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffdd4f94a0 a2=0 a3=1 items=0 ppid=3639 pid=5520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:38.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:39.549205 containerd[1971]: time="2025-12-16T12:18:39.548906444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:18:39.871107 containerd[1971]: time="2025-12-16T12:18:39.870897106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:39.873219 containerd[1971]: time="2025-12-16T12:18:39.873144742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:18:39.873363 containerd[1971]: time="2025-12-16T12:18:39.873270010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:39.873558 kubelet[3411]: E1216 12:18:39.873501 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:18:39.874088 kubelet[3411]: E1216 12:18:39.873569 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:18:39.874088 kubelet[3411]: E1216 12:18:39.873745 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h456d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-f4x44_calico-system(9e39aa72-dd6b-4253-877f-1d57a9236239): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:39.875696 kubelet[3411]: E1216 12:18:39.875451 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:18:40.549087 containerd[1971]: time="2025-12-16T12:18:40.549012261Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:18:40.876179 containerd[1971]: time="2025-12-16T12:18:40.875827871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:40.878398 containerd[1971]: time="2025-12-16T12:18:40.878308631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:18:40.878555 containerd[1971]: time="2025-12-16T12:18:40.878444807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:40.879399 kubelet[3411]: E1216 12:18:40.879006 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:18:40.879399 kubelet[3411]: E1216 12:18:40.879171 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:18:40.879999 kubelet[3411]: E1216 12:18:40.879484 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z56p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5959d55c94-c8546_calico-system(9a9f64cf-c939-425f-bc9b-14da143ab498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:40.881608 kubelet[3411]: E1216 12:18:40.881043 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:18:40.881742 containerd[1971]: time="2025-12-16T12:18:40.880383323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:18:41.150409 containerd[1971]: time="2025-12-16T12:18:41.150120464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:41.152466 containerd[1971]: time="2025-12-16T12:18:41.152284028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:18:41.152466 containerd[1971]: time="2025-12-16T12:18:41.152403656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:41.153106 kubelet[3411]: E1216 12:18:41.152758 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:18:41.153106 kubelet[3411]: E1216 12:18:41.152821 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:18:41.153106 kubelet[3411]: E1216 12:18:41.152984 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:41.157461 containerd[1971]: time="2025-12-16T12:18:41.156972776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:18:41.458216 containerd[1971]: time="2025-12-16T12:18:41.458022837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:41.460481 containerd[1971]: time="2025-12-16T12:18:41.460400433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:18:41.460481 containerd[1971]: time="2025-12-16T12:18:41.460438869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:41.460930 kubelet[3411]: E1216 12:18:41.460699 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:18:41.460930 kubelet[3411]: E1216 12:18:41.460768 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:18:41.461234 kubelet[3411]: E1216 12:18:41.460931 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:41.462303 kubelet[3411]: E1216 12:18:41.462215 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:41.550126 containerd[1971]: time="2025-12-16T12:18:41.549850222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:18:41.829652 containerd[1971]: time="2025-12-16T12:18:41.829559411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:18:41.831780 containerd[1971]: time="2025-12-16T12:18:41.831707363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:18:41.831894 containerd[1971]: time="2025-12-16T12:18:41.831825815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:41.832200 kubelet[3411]: E1216 12:18:41.832048 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:41.832311 kubelet[3411]: E1216 12:18:41.832215 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:41.832817 kubelet[3411]: E1216 12:18:41.832399 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ntjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-6k77x_calico-apiserver(41fe1dec-6478-42fa-9c60-8b697b125498): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:41.833925 kubelet[3411]: E1216 12:18:41.833856 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:18:41.975578 systemd[1]: Started sshd@8-172.31.20.6:22-147.75.109.163:55938.service - OpenSSH per-connection server daemon (147.75.109.163:55938). Dec 16 12:18:41.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.20.6:22-147.75.109.163:55938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:41.979108 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:18:41.979260 kernel: audit: type=1130 audit(1765887521.975:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.20.6:22-147.75.109.163:55938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:42.167000 audit[5524]: USER_ACCT pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.170235 sshd[5524]: Accepted publickey for core from 147.75.109.163 port 55938 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:42.177139 kernel: audit: type=1101 audit(1765887522.167:767): pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.177228 kernel: audit: type=1103 audit(1765887522.175:768): pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.175000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.180403 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:42.186456 kernel: audit: type=1006 audit(1765887522.175:769): pid=5524 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:18:42.175000 audit[5524]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7ac0fa0 a2=3 a3=0 items=0 ppid=1 pid=5524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:18:42.193743 kernel: audit: type=1300 audit(1765887522.175:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7ac0fa0 a2=3 a3=0 items=0 ppid=1 pid=5524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:42.175000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:42.196852 kernel: audit: type=1327 audit(1765887522.175:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:42.205539 systemd-logind[1939]: New session 10 of user core. Dec 16 12:18:42.211620 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:18:42.219000 audit[5524]: USER_START pid=5524 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.229000 audit[5528]: CRED_ACQ pid=5528 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.237361 kernel: audit: type=1105 audit(1765887522.219:770): pid=5524 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.237451 kernel: audit: type=1103 audit(1765887522.229:771): pid=5528 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.444283 sshd[5528]: Connection closed by 147.75.109.163 port 55938 Dec 16 12:18:42.445283 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:42.447000 audit[5524]: USER_END pid=5524 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.454649 systemd[1]: sshd@8-172.31.20.6:22-147.75.109.163:55938.service: Deactivated successfully. Dec 16 12:18:42.447000 audit[5524]: CRED_DISP pid=5524 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.461614 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 16 12:18:42.461775 kernel: audit: type=1106 audit(1765887522.447:772): pid=5524 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.461820 kernel: audit: type=1104 audit(1765887522.447:773): pid=5524 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:42.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.20.6:22-147.75.109.163:55938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:42.467802 systemd-logind[1939]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:18:42.469946 systemd-logind[1939]: Removed session 10. Dec 16 12:18:42.548035 containerd[1971]: time="2025-12-16T12:18:42.547960679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:18:42.843522 containerd[1971]: time="2025-12-16T12:18:42.843445056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:42.845697 containerd[1971]: time="2025-12-16T12:18:42.845625600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:18:42.845928 containerd[1971]: time="2025-12-16T12:18:42.845656632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:42.845983 kubelet[3411]: E1216 12:18:42.845924 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:42.846491 kubelet[3411]: E1216 12:18:42.845988 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:42.846938 kubelet[3411]: E1216 12:18:42.846783 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88br8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-c27r4_calico-apiserver(a0556f5e-184b-4527-b60e-270da372abfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:42.848291 kubelet[3411]: E1216 12:18:42.848131 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:18:47.485599 systemd[1]: Started sshd@9-172.31.20.6:22-147.75.109.163:56732.service - OpenSSH per-connection server daemon (147.75.109.163:56732). Dec 16 12:18:47.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.20.6:22-147.75.109.163:56732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:47.487381 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:47.487474 kernel: audit: type=1130 audit(1765887527.485:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.20.6:22-147.75.109.163:56732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:47.550653 kubelet[3411]: E1216 12:18:47.550585 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:18:47.668000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.675829 sshd[5549]: Accepted publickey for core from 147.75.109.163 port 56732 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:47.677110 kernel: audit: type=1101 audit(1765887527.668:776): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.676000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.681715 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:47.686783 kernel: audit: type=1103 audit(1765887527.676:777): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.686894 kernel: audit: type=1006 audit(1765887527.676:778): pid=5549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:18:47.676000 audit[5549]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6d87b00 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:47.694481 kernel: audit: type=1300 audit(1765887527.676:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6d87b00 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:47.694615 kernel: audit: type=1327 audit(1765887527.676:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:47.676000 
audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:47.693431 systemd-logind[1939]: New session 11 of user core. Dec 16 12:18:47.698394 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:18:47.704000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.712000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.718045 kernel: audit: type=1105 audit(1765887527.704:779): pid=5549 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.718168 kernel: audit: type=1103 audit(1765887527.712:780): pid=5553 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.896811 sshd[5553]: Connection closed by 147.75.109.163 port 56732 Dec 16 12:18:47.897654 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:47.929000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.929000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.943779 kernel: audit: type=1106 audit(1765887527.929:781): pid=5549 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.943898 kernel: audit: type=1104 audit(1765887527.929:782): pid=5549 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:47.941537 systemd[1]: Started sshd@10-172.31.20.6:22-147.75.109.163:56744.service - OpenSSH per-connection server daemon (147.75.109.163:56744). Dec 16 12:18:47.942670 systemd[1]: sshd@9-172.31.20.6:22-147.75.109.163:56732.service: Deactivated successfully. 
Dec 16 12:18:47.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.20.6:22-147.75.109.163:56744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:47.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.20.6:22-147.75.109.163:56732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:47.949337 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:18:47.953994 systemd-logind[1939]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:18:47.961443 systemd-logind[1939]: Removed session 11. Dec 16 12:18:48.136000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.137763 sshd[5563]: Accepted publickey for core from 147.75.109.163 port 56744 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:48.138000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.138000 audit[5563]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8492ad0 a2=3 a3=0 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:48.138000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:48.141247 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:48.150193 systemd-logind[1939]: New session 12 of user core. Dec 16 12:18:48.157420 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:18:48.163000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.166000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.445378 sshd[5570]: Connection closed by 147.75.109.163 port 56744 Dec 16 12:18:48.444830 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:48.451000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.451000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.461742 systemd[1]: sshd@10-172.31.20.6:22-147.75.109.163:56744.service: Deactivated successfully. Dec 16 12:18:48.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.20.6:22-147.75.109.163:56744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:48.473685 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:18:48.478196 systemd-logind[1939]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:18:48.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.20.6:22-147.75.109.163:56758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:48.499674 systemd[1]: Started sshd@11-172.31.20.6:22-147.75.109.163:56758.service - OpenSSH per-connection server daemon (147.75.109.163:56758). Dec 16 12:18:48.503269 systemd-logind[1939]: Removed session 12. 
Dec 16 12:18:48.679000 audit[5581]: USER_ACCT pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.680400 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 56758 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:48.680000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.681000 audit[5581]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7d341f0 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:48.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:48.684501 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:48.695183 systemd-logind[1939]: New session 13 of user core. Dec 16 12:18:48.700354 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:18:48.708000 audit[5581]: USER_START pid=5581 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.712000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.903215 sshd[5585]: Connection closed by 147.75.109.163 port 56758 Dec 16 12:18:48.904007 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:48.905000 audit[5581]: USER_END pid=5581 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.906000 audit[5581]: CRED_DISP pid=5581 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:48.912001 systemd[1]: sshd@11-172.31.20.6:22-147.75.109.163:56758.service: Deactivated successfully. Dec 16 12:18:48.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.20.6:22-147.75.109.163:56758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:48.917506 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:18:48.921154 systemd-logind[1939]: Session 13 logged out. Waiting for processes to exit. 
Dec 16 12:18:48.923808 systemd-logind[1939]: Removed session 13. Dec 16 12:18:53.552687 kubelet[3411]: E1216 12:18:53.552493 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:18:53.941626 systemd[1]: Started sshd@12-172.31.20.6:22-147.75.109.163:35324.service - OpenSSH per-connection server daemon (147.75.109.163:35324). Dec 16 12:18:53.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.20.6:22-147.75.109.163:35324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:53.944201 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:18:53.944275 kernel: audit: type=1130 audit(1765887533.942:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.20.6:22-147.75.109.163:35324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:54.154000 audit[5629]: USER_ACCT pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.156329 sshd[5629]: Accepted publickey for core from 147.75.109.163 port 35324 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:54.162637 kernel: audit: type=1101 audit(1765887534.154:803): pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.162753 kernel: audit: type=1103 audit(1765887534.161:804): pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.161000 audit[5629]: CRED_ACQ pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.164742 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:54.171437 kernel: audit: type=1006 audit(1765887534.162:805): pid=5629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:18:54.171864 kernel: audit: type=1300 audit(1765887534.162:805): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4d00960 a2=3 a3=0 items=0 ppid=1 pid=5629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:54.162000 audit[5629]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4d00960 a2=3 a3=0 items=0 ppid=1 pid=5629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:54.162000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:54.180665 kernel: audit: type=1327 audit(1765887534.162:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:54.184253 systemd-logind[1939]: New session 14 of user core. Dec 16 12:18:54.191419 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:18:54.196000 audit[5629]: USER_START pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.205000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.210713 kernel: audit: type=1105 audit(1765887534.196:806): pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.210791 kernel: audit: type=1103 audit(1765887534.205:807): pid=5634 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.432598 sshd[5634]: Connection closed by 147.75.109.163 port 35324 Dec 16 12:18:54.433465 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:54.436000 audit[5629]: USER_END pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.444379 systemd[1]: sshd@12-172.31.20.6:22-147.75.109.163:35324.service: Deactivated successfully. Dec 16 12:18:54.450244 kernel: audit: type=1106 audit(1765887534.436:808): pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.450356 kernel: audit: type=1104 audit(1765887534.436:809): pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.436000 audit[5629]: CRED_DISP pid=5629 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:54.450487 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:18:54.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.20.6:22-147.75.109.163:35324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:54.453848 systemd-logind[1939]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 12:18:54.458311 systemd-logind[1939]: Removed session 14. Dec 16 12:18:54.546866 kubelet[3411]: E1216 12:18:54.546781 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:18:54.549403 kubelet[3411]: E1216 12:18:54.549147 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:18:55.551118 kubelet[3411]: E1216 12:18:55.549334 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:18:55.561786 kubelet[3411]: E1216 12:18:55.561643 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:18:59.474696 systemd[1]: Started sshd@13-172.31.20.6:22-147.75.109.163:35332.service - OpenSSH per-connection server daemon (147.75.109.163:35332). Dec 16 12:18:59.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.20.6:22-147.75.109.163:35332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:59.477493 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:59.477585 kernel: audit: type=1130 audit(1765887539.475:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.20.6:22-147.75.109.163:35332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:59.664000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.665261 sshd[5650]: Accepted publickey for core from 147.75.109.163 port 35332 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:18:59.672090 kernel: audit: type=1101 audit(1765887539.664:812): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.672000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.674586 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:59.682267 kernel: audit: type=1103 audit(1765887539.672:813): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.682327 kernel: audit: type=1006 audit(1765887539.672:814): pid=5650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:18:59.672000 audit[5650]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff47fcf50 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:59.688915 kernel: audit: type=1300 audit(1765887539.672:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff47fcf50 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:59.672000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:59.692112 kernel: audit: type=1327 audit(1765887539.672:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:59.696093 systemd-logind[1939]: New session 15 of user core. Dec 16 12:18:59.701375 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 12:18:59.706000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.715136 kernel: audit: type=1105 audit(1765887539.706:815): pid=5650 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.715000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.722229 kernel: audit: type=1103 audit(1765887539.715:816): pid=5654 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.902919 sshd[5654]: Connection closed by 147.75.109.163 port 35332 Dec 16 12:18:59.905420 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:59.910000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.919200 systemd[1]: sshd@13-172.31.20.6:22-147.75.109.163:35332.service: Deactivated successfully. Dec 16 12:18:59.910000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.925352 kernel: audit: type=1106 audit(1765887539.910:817): pid=5650 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.925814 kernel: audit: type=1104 audit(1765887539.910:818): pid=5650 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:18:59.931266 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:18:59.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.20.6:22-147.75.109.163:35332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:59.938008 systemd-logind[1939]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 12:18:59.941144 systemd-logind[1939]: Removed session 15. Dec 16 12:19:01.552348 containerd[1971]: time="2025-12-16T12:19:01.551990249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:19:01.855572 containerd[1971]: time="2025-12-16T12:19:01.855259855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:01.859544 containerd[1971]: time="2025-12-16T12:19:01.859418215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:19:01.859934 containerd[1971]: time="2025-12-16T12:19:01.859473211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:01.861074 kubelet[3411]: E1216 12:19:01.860325 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:19:01.861637 kubelet[3411]: E1216 12:19:01.861148 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:19:01.861637 kubelet[3411]: E1216 12:19:01.861383 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f43187b9eb34459fb9682ed9d785cfc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:01.867072 containerd[1971]: 
time="2025-12-16T12:19:01.866213659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:19:02.162962 containerd[1971]: time="2025-12-16T12:19:02.162789484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:02.165570 containerd[1971]: time="2025-12-16T12:19:02.165483508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:19:02.165570 containerd[1971]: time="2025-12-16T12:19:02.165516064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:02.165905 kubelet[3411]: E1216 12:19:02.165839 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:19:02.165972 kubelet[3411]: E1216 12:19:02.165902 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:19:02.166839 kubelet[3411]: E1216 12:19:02.166715 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:02.168100 kubelet[3411]: E1216 12:19:02.167996 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:19:04.548540 containerd[1971]: time="2025-12-16T12:19:04.548481908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:19:04.857110 containerd[1971]: time="2025-12-16T12:19:04.856611298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:04.859590 containerd[1971]: time="2025-12-16T12:19:04.859380550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:19:04.859590 containerd[1971]: time="2025-12-16T12:19:04.859522846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:04.859825 kubelet[3411]: E1216 12:19:04.859672 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:19:04.859825 kubelet[3411]: E1216 12:19:04.859730 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:19:04.860633 kubelet[3411]: E1216 12:19:04.859881 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:04.863283 containerd[1971]: time="2025-12-16T12:19:04.862882246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:19:04.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.20.6:22-147.75.109.163:47604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:04.942557 systemd[1]: Started sshd@14-172.31.20.6:22-147.75.109.163:47604.service - OpenSSH per-connection server daemon (147.75.109.163:47604). Dec 16 12:19:04.945075 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:04.945175 kernel: audit: type=1130 audit(1765887544.942:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.20.6:22-147.75.109.163:47604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:05.151000 audit[5672]: USER_ACCT pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.152748 sshd[5672]: Accepted publickey for core from 147.75.109.163 port 47604 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:05.158000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.160838 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:05.164357 kernel: audit: type=1101 audit(1765887545.151:821): pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.164473 kernel: audit: type=1103 audit(1765887545.158:822): pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.168093 kernel: audit: type=1006 audit(1765887545.158:823): pid=5672 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:19:05.168788 containerd[1971]: time="2025-12-16T12:19:05.168384379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:05.158000 audit[5672]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf066160 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:05.171764 containerd[1971]: time="2025-12-16T12:19:05.171574867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:19:05.171764 containerd[1971]: time="2025-12-16T12:19:05.171696679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:05.175159 kernel: audit: type=1300 audit(1765887545.158:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf066160 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:05.175895 kubelet[3411]: E1216 12:19:05.175418 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:19:05.175895 kubelet[3411]: E1216 12:19:05.175639 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:19:05.175895 kubelet[3411]: E1216 12:19:05.175815 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:05.177675 kubelet[3411]: E1216 12:19:05.177570 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 
12:19:05.158000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:05.182364 kernel: audit: type=1327 audit(1765887545.158:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:05.192161 systemd-logind[1939]: New session 16 of user core. Dec 16 12:19:05.196431 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:19:05.205000 audit[5672]: USER_START pid=5672 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.209000 audit[5676]: CRED_ACQ pid=5676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.223989 kernel: audit: type=1105 audit(1765887545.205:824): pid=5672 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.224116 kernel: audit: type=1103 audit(1765887545.209:825): pid=5676 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.418442 sshd[5676]: Connection closed by 147.75.109.163 port 47604 Dec 16 12:19:05.420385 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:05.423000 audit[5672]: USER_END pid=5672 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.430760 systemd[1]: sshd@14-172.31.20.6:22-147.75.109.163:47604.service: Deactivated successfully. Dec 16 12:19:05.423000 audit[5672]: CRED_DISP pid=5672 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.437625 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 12:19:05.437812 kernel: audit: type=1106 audit(1765887545.423:826): pid=5672 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.437872 kernel: audit: type=1104 audit(1765887545.423:827): pid=5672 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:05.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.20.6:22-147.75.109.163:47604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:05.442731 systemd-logind[1939]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:19:05.446802 systemd-logind[1939]: Removed session 16. Dec 16 12:19:05.549976 containerd[1971]: time="2025-12-16T12:19:05.549643713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:19:05.812334 containerd[1971]: time="2025-12-16T12:19:05.812258302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:05.814540 containerd[1971]: time="2025-12-16T12:19:05.814438378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:19:05.814712 containerd[1971]: time="2025-12-16T12:19:05.814548478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:05.815119 kubelet[3411]: E1216 12:19:05.815027 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:19:05.815558 kubelet[3411]: E1216 12:19:05.815275 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:19:05.815558 kubelet[3411]: E1216 12:19:05.815474 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z56p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5959d55c94-c8546_calico-system(9a9f64cf-c939-425f-bc9b-14da143ab498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:05.816984 kubelet[3411]: E1216 12:19:05.816787 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:19:06.271845 update_engine[1942]: I20251216 12:19:06.271104 1942 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 12:19:06.271845 update_engine[1942]: I20251216 
12:19:06.271180 1942 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 12:19:06.271845 update_engine[1942]: I20251216 12:19:06.271642 1942 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 12:19:06.274139 update_engine[1942]: I20251216 12:19:06.273968 1942 omaha_request_params.cc:62] Current group set to alpha Dec 16 12:19:06.274423 update_engine[1942]: I20251216 12:19:06.274389 1942 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 12:19:06.274529 update_engine[1942]: I20251216 12:19:06.274500 1942 update_attempter.cc:643] Scheduling an action processor start. Dec 16 12:19:06.274667 update_engine[1942]: I20251216 12:19:06.274636 1942 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 12:19:06.279801 update_engine[1942]: I20251216 12:19:06.279738 1942 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 12:19:06.280393 update_engine[1942]: I20251216 12:19:06.280297 1942 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 12:19:06.281324 update_engine[1942]: I20251216 12:19:06.280331 1942 omaha_request_action.cc:272] Request: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: Dec 16 12:19:06.281324 update_engine[1942]: I20251216 12:19:06.280530 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:19:06.283119 locksmithd[2002]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 12:19:06.287114 update_engine[1942]: I20251216 12:19:06.286830 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:19:06.288467 update_engine[1942]: I20251216 12:19:06.288389 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 12:19:06.301474 update_engine[1942]: E20251216 12:19:06.301396 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:19:06.301590 update_engine[1942]: I20251216 12:19:06.301527 1942 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 12:19:07.549926 containerd[1971]: time="2025-12-16T12:19:07.549325367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:19:07.870083 containerd[1971]: time="2025-12-16T12:19:07.869828389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:07.872304 containerd[1971]: time="2025-12-16T12:19:07.872223325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:19:07.872526 containerd[1971]: time="2025-12-16T12:19:07.872267461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:07.872589 kubelet[3411]: E1216 12:19:07.872508 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:19:07.872589 kubelet[3411]: E1216 12:19:07.872572 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:19:07.873185 kubelet[3411]: E1216 12:19:07.872879 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88br8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-c27r4_calico-apiserver(a0556f5e-184b-4527-b60e-270da372abfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:07.874691 kubelet[3411]: E1216 12:19:07.874029 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:19:07.874846 containerd[1971]: time="2025-12-16T12:19:07.874184617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:19:08.257170 containerd[1971]: time="2025-12-16T12:19:08.256937459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:08.259750 containerd[1971]: time="2025-12-16T12:19:08.259560911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:19:08.259750 containerd[1971]: time="2025-12-16T12:19:08.259683707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:08.260111 kubelet[3411]: E1216 12:19:08.259993 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:19:08.260230 kubelet[3411]: E1216 12:19:08.260118 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:19:08.260386 kubelet[3411]: E1216 12:19:08.260298 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ntjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-6k77x_calico-apiserver(41fe1dec-6478-42fa-9c60-8b697b125498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:08.262236 kubelet[3411]: E1216 12:19:08.262166 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:19:09.549520 containerd[1971]: time="2025-12-16T12:19:09.549434569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:19:09.827395 containerd[1971]: time="2025-12-16T12:19:09.827228162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:09.829663 containerd[1971]: time="2025-12-16T12:19:09.829517462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:19:09.829663 containerd[1971]: time="2025-12-16T12:19:09.829531514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:09.829882 kubelet[3411]: E1216 
12:19:09.829831 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:19:09.830409 kubelet[3411]: E1216 12:19:09.829893 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:19:09.830409 kubelet[3411]: E1216 12:19:09.830094 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h456d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-f4x44_calico-system(9e39aa72-dd6b-4253-877f-1d57a9236239): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:09.832166 kubelet[3411]: E1216 12:19:09.831846 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:19:10.456335 systemd[1]: Started sshd@15-172.31.20.6:22-147.75.109.163:47620.service - OpenSSH per-connection server daemon (147.75.109.163:47620). Dec 16 12:19:10.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.20.6:22-147.75.109.163:47620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:10.459206 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:10.459299 kernel: audit: type=1130 audit(1765887550.456:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.20.6:22-147.75.109.163:47620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:10.653000 audit[5693]: USER_ACCT pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.655159 sshd[5693]: Accepted publickey for core from 147.75.109.163 port 47620 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:10.661132 kernel: audit: type=1101 audit(1765887550.653:830): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.661000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.663891 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:10.671843 kernel: audit: type=1103 audit(1765887550.661:831): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.672014 kernel: audit: type=1006 audit(1765887550.661:832): pid=5693 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:19:10.661000 audit[5693]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc9645b70 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:10.678741 kernel: audit: 
type=1300 audit(1765887550.661:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc9645b70 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:10.661000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:10.681755 kernel: audit: type=1327 audit(1765887550.661:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:10.684311 systemd-logind[1939]: New session 17 of user core. Dec 16 12:19:10.698384 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:19:10.703000 audit[5693]: USER_START pid=5693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.711000 audit[5697]: CRED_ACQ pid=5697 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.717959 kernel: audit: type=1105 audit(1765887550.703:833): pid=5693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.718037 kernel: audit: type=1103 audit(1765887550.711:834): pid=5697 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.890903 sshd[5697]: Connection closed by 147.75.109.163 port 47620 Dec 16 12:19:10.891758 sshd-session[5693]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:10.893000 audit[5693]: USER_END pid=5693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.902741 systemd[1]: sshd@15-172.31.20.6:22-147.75.109.163:47620.service: Deactivated successfully. 
Dec 16 12:19:10.893000 audit[5693]: CRED_DISP pid=5693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.909323 kernel: audit: type=1106 audit(1765887550.893:835): pid=5693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.909470 kernel: audit: type=1104 audit(1765887550.893:836): pid=5693 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:10.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.20.6:22-147.75.109.163:47620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:10.912846 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:19:10.917973 systemd-logind[1939]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:19:10.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.20.6:22-147.75.109.163:47636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:10.938578 systemd[1]: Started sshd@16-172.31.20.6:22-147.75.109.163:47636.service - OpenSSH per-connection server daemon (147.75.109.163:47636). Dec 16 12:19:10.942240 systemd-logind[1939]: Removed session 17. Dec 16 12:19:11.125000 audit[5709]: USER_ACCT pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.126907 sshd[5709]: Accepted publickey for core from 147.75.109.163 port 47636 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:11.127000 audit[5709]: CRED_ACQ pid=5709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.127000 audit[5709]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc06426e0 a2=3 a3=0 items=0 ppid=1 pid=5709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:11.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:11.130147 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:11.140264 systemd-logind[1939]: New session 18 of user core. Dec 16 12:19:11.147406 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:19:11.153000 audit[5709]: USER_START pid=5709 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.157000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.559110 sshd[5713]: Connection closed by 147.75.109.163 port 47636 Dec 16 12:19:11.558948 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:11.560000 audit[5709]: USER_END pid=5709 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.560000 audit[5709]: CRED_DISP pid=5709 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.566828 systemd-logind[1939]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:19:11.567593 systemd[1]: sshd@16-172.31.20.6:22-147.75.109.163:47636.service: Deactivated successfully. Dec 16 12:19:11.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.20.6:22-147.75.109.163:47636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:11.572475 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:19:11.580521 systemd-logind[1939]: Removed session 18. Dec 16 12:19:11.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.20.6:22-147.75.109.163:47650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:11.595198 systemd[1]: Started sshd@17-172.31.20.6:22-147.75.109.163:47650.service - OpenSSH per-connection server daemon (147.75.109.163:47650). 
Dec 16 12:19:11.783000 audit[5723]: USER_ACCT pid=5723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.786072 sshd[5723]: Accepted publickey for core from 147.75.109.163 port 47650 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:11.786000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.786000 audit[5723]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc00fe8d0 a2=3 a3=0 items=0 ppid=1 pid=5723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:11.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:11.788999 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:11.797617 systemd-logind[1939]: New session 19 of user core. Dec 16 12:19:11.804358 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:19:11.812000 audit[5723]: USER_START pid=5723 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:11.819000 audit[5727]: CRED_ACQ pid=5727 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:12.786106 sshd[5727]: Connection closed by 147.75.109.163 port 47650 Dec 16 12:19:12.786431 sshd-session[5723]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:12.793000 audit[5723]: USER_END pid=5723 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:12.793000 audit[5723]: CRED_DISP pid=5723 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:12.791000 audit[5738]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:12.791000 audit[5738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff05bc3c0 a2=0 a3=1 items=0 ppid=3639 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:12.791000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:12.803365 systemd-logind[1939]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:19:12.801000 audit[5738]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:12.801000 audit[5738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff05bc3c0 a2=0 a3=1 items=0 ppid=3639 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:12.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:12.803744 systemd[1]: sshd@17-172.31.20.6:22-147.75.109.163:47650.service: Deactivated successfully. Dec 16 12:19:12.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.20.6:22-147.75.109.163:47650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:12.811875 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:19:12.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.20.6:22-147.75.109.163:59560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:12.841635 systemd[1]: Started sshd@18-172.31.20.6:22-147.75.109.163:59560.service - OpenSSH per-connection server daemon (147.75.109.163:59560). Dec 16 12:19:12.847645 systemd-logind[1939]: Removed session 19. 
Dec 16 12:19:12.851000 audit[5745]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:12.851000 audit[5745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcb816550 a2=0 a3=1 items=0 ppid=3639 pid=5745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:12.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:12.857000 audit[5745]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:12.857000 audit[5745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcb816550 a2=0 a3=1 items=0 ppid=3639 pid=5745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:12.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:13.046000 audit[5744]: USER_ACCT pid=5744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.048514 sshd[5744]: Accepted publickey for core from 147.75.109.163 port 59560 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:13.047000 audit[5744]: CRED_ACQ pid=5744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.048000 audit[5744]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc09e160 a2=3 a3=0 items=0 ppid=1 pid=5744 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:13.048000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:13.051759 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:13.061096 systemd-logind[1939]: New session 20 of user core. Dec 16 12:19:13.071388 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:19:13.076000 audit[5744]: USER_START pid=5744 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.081000 audit[5749]: CRED_ACQ pid=5749 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.584100 sshd[5749]: Connection closed by 147.75.109.163 port 59560 Dec 16 12:19:13.586320 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:13.588000 audit[5744]: USER_END pid=5744 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.588000 audit[5744]: CRED_DISP pid=5744 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.594559 systemd[1]: sshd@18-172.31.20.6:22-147.75.109.163:59560.service: Deactivated successfully. Dec 16 12:19:13.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.20.6:22-147.75.109.163:59560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:13.602699 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:19:13.606402 systemd-logind[1939]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:19:13.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.20.6:22-147.75.109.163:59572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:13.626298 systemd[1]: Started sshd@19-172.31.20.6:22-147.75.109.163:59572.service - OpenSSH per-connection server daemon (147.75.109.163:59572). Dec 16 12:19:13.629457 systemd-logind[1939]: Removed session 20. 
Dec 16 12:19:13.815000 audit[5758]: USER_ACCT pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.817174 sshd[5758]: Accepted publickey for core from 147.75.109.163 port 59572 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:13.817000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.817000 audit[5758]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2fe8650 a2=3 a3=0 items=0 ppid=1 pid=5758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:13.817000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:13.820644 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:13.828916 systemd-logind[1939]: New session 21 of user core. Dec 16 12:19:13.844399 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:19:13.850000 audit[5758]: USER_START pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:13.853000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:14.024358 sshd[5762]: Connection closed by 147.75.109.163 port 59572 Dec 16 12:19:14.025332 sshd-session[5758]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:14.027000 audit[5758]: USER_END pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:14.027000 audit[5758]: CRED_DISP pid=5758 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:14.033574 systemd[1]: sshd@19-172.31.20.6:22-147.75.109.163:59572.service: Deactivated successfully. Dec 16 12:19:14.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.20.6:22-147.75.109.163:59572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:14.037362 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:19:14.040993 systemd-logind[1939]: Session 21 logged out. Waiting for processes to exit. 
Dec 16 12:19:14.044990 systemd-logind[1939]: Removed session 21. Dec 16 12:19:15.552494 kubelet[3411]: E1216 12:19:15.551412 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:19:16.270706 update_engine[1942]: I20251216 12:19:16.270610 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:19:16.271565 update_engine[1942]: I20251216 12:19:16.270735 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:19:16.271565 update_engine[1942]: I20251216 12:19:16.271442 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:19:16.272473 update_engine[1942]: E20251216 12:19:16.272411 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:19:16.272581 update_engine[1942]: I20251216 12:19:16.272532 1942 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 12:19:19.065129 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:19:19.065311 kernel: audit: type=1130 audit(1765887559.062:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.20.6:22-147.75.109.163:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:19.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.20.6:22-147.75.109.163:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:19.062840 systemd[1]: Started sshd@20-172.31.20.6:22-147.75.109.163:59576.service - OpenSSH per-connection server daemon (147.75.109.163:59576). 
Dec 16 12:19:19.256000 audit[5774]: USER_ACCT pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.257797 sshd[5774]: Accepted publickey for core from 147.75.109.163 port 59576 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:19.264107 kernel: audit: type=1101 audit(1765887559.256:879): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.265000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.268978 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:19.277339 kernel: audit: type=1103 audit(1765887559.265:880): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.277465 kernel: audit: type=1006 audit(1765887559.266:881): pid=5774 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:19:19.278364 kernel: audit: type=1300 audit(1765887559.266:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde3a1bf0 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:19.266000 audit[5774]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde3a1bf0 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:19.266000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:19.287272 kernel: audit: type=1327 audit(1765887559.266:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:19.293589 systemd-logind[1939]: New session 22 of user core. Dec 16 12:19:19.299393 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:19:19.304000 audit[5774]: USER_START pid=5774 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.313170 kernel: audit: type=1105 audit(1765887559.304:882): pid=5774 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.312000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.322119 kernel: audit: type=1103 audit(1765887559.312:883): pid=5778 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.496045 sshd[5778]: Connection closed by 147.75.109.163 port 59576 Dec 16 12:19:19.497554 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:19.499000 audit[5774]: USER_END pid=5774 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.505177 systemd-logind[1939]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:19:19.499000 audit[5774]: CRED_DISP pid=5774 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.510541 systemd[1]: sshd@20-172.31.20.6:22-147.75.109.163:59576.service: Deactivated successfully. Dec 16 12:19:19.515283 kernel: audit: type=1106 audit(1765887559.499:884): pid=5774 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.515392 kernel: audit: type=1104 audit(1765887559.499:885): pid=5774 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:19.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.20.6:22-147.75.109.163:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:19.517774 systemd[1]: session-22.scope: Deactivated successfully. 
Dec 16 12:19:19.521479 systemd-logind[1939]: Removed session 22. Dec 16 12:19:19.553773 kubelet[3411]: E1216 12:19:19.553611 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:19:19.557150 kubelet[3411]: E1216 12:19:19.554794 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:19:19.557150 kubelet[3411]: E1216 12:19:19.554942 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:19:20.147000 audit[5790]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:20.147000 audit[5790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffede69d60 a2=0 a3=1 items=0 ppid=3639 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:20.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:20.155000 audit[5790]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:20.155000 audit[5790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffede69d60 a2=0 a3=1 items=0 ppid=3639 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:20.155000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:22.547702 kubelet[3411]: E1216 12:19:22.547403 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:19:22.549207 kubelet[3411]: E1216 12:19:22.548691 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:19:24.532953 systemd[1]: Started sshd@21-172.31.20.6:22-147.75.109.163:39624.service - OpenSSH per-connection server daemon (147.75.109.163:39624). Dec 16 12:19:24.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.20.6:22-147.75.109.163:39624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:24.536277 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:19:24.536376 kernel: audit: type=1130 audit(1765887564.532:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.20.6:22-147.75.109.163:39624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:24.721000 audit[5817]: USER_ACCT pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.723328 sshd[5817]: Accepted publickey for core from 147.75.109.163 port 39624 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:24.729144 kernel: audit: type=1101 audit(1765887564.721:890): pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.730000 audit[5817]: CRED_ACQ pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.735713 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:24.742427 kernel: audit: type=1103 audit(1765887564.730:891): pid=5817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.742550 kernel: audit: type=1006 audit(1765887564.733:892): pid=5817 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:19:24.742616 kernel: audit: type=1300 audit(1765887564.733:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdac3be70 a2=3 a3=0 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:24.733000 audit[5817]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdac3be70 a2=3 a3=0 items=0 ppid=1 pid=5817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:24.733000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:24.752351 kernel: audit: type=1327 audit(1765887564.733:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:24.760160 systemd-logind[1939]: New session 23 of user core. Dec 16 12:19:24.770370 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:19:24.775000 audit[5817]: USER_START pid=5817 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.783000 audit[5821]: CRED_ACQ pid=5821 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.789851 kernel: audit: type=1105 audit(1765887564.775:893): pid=5817 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.789984 kernel: audit: type=1103 audit(1765887564.783:894): pid=5821 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.986262 sshd[5821]: Connection closed by 147.75.109.163 port 39624 Dec 16 12:19:24.989142 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:24.991000 audit[5817]: USER_END pid=5817 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:25.004860 systemd[1]: sshd@21-172.31.20.6:22-147.75.109.163:39624.service: Deactivated successfully. Dec 16 12:19:25.011116 kernel: audit: type=1106 audit(1765887564.991:895): pid=5817 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:25.011198 kernel: audit: type=1104 audit(1765887564.991:896): pid=5817 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:24.991000 audit[5817]: CRED_DISP pid=5817 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:25.011597 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:19:25.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.20.6:22-147.75.109.163:39624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:25.020209 systemd-logind[1939]: Session 23 logged out. Waiting for processes to exit. 
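[Editor's note] Every SSH login in this journal produces the same audit sequence, and each record appears twice: once as the named userspace record and once as the kernel's "audit: type=NNNN" echo (the "kauditd_printk_skb: N callbacks suppressed" lines are the kernel rate-limiting those echoes). The sequence for session 23 above is: USER_ACCT (type 1101, PAM accounting for user core), CRED_ACQ (1103, credentials acquired), the type 1006 LOGIN event that assigns auid=500 and the session id ses=23, SYSCALL (1300) and PROCTITLE (1327) for the sshd-session process, then USER_START (1105) when the PAM session opens. Logout reverses it with USER_END (1106) and CRED_DISP (1104), and systemd brackets the whole connection with SERVICE_START/SERVICE_STOP records (1130/1131) for the per-connection sshd@… unit. The later sessions (24 through 28) repeat this pattern unchanged.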
Dec 16 12:19:25.026769 systemd-logind[1939]: Removed session 23. Dec 16 12:19:26.271127 update_engine[1942]: I20251216 12:19:26.270701 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:19:26.271127 update_engine[1942]: I20251216 12:19:26.270813 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:19:26.271820 update_engine[1942]: I20251216 12:19:26.271476 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:19:26.272806 update_engine[1942]: E20251216 12:19:26.272737 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:19:26.272907 update_engine[1942]: I20251216 12:19:26.272866 1942 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 12:19:30.030261 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:30.030400 kernel: audit: type=1130 audit(1765887570.025:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.20.6:22-147.75.109.163:39632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:30.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.20.6:22-147.75.109.163:39632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:30.026557 systemd[1]: Started sshd@22-172.31.20.6:22-147.75.109.163:39632.service - OpenSSH per-connection server daemon (147.75.109.163:39632). Dec 16 12:19:30.223000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.224979 sshd[5835]: Accepted publickey for core from 147.75.109.163 port 39632 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:30.235118 kernel: audit: type=1101 audit(1765887570.223:899): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.234000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.243274 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:30.250182 kernel: audit: type=1103 audit(1765887570.234:900): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.250275 kernel: audit: type=1006 audit(1765887570.234:901): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:19:30.234000 audit[5835]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1195160 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:30.258136 kernel: audit: type=1300 audit(1765887570.234:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc1195160 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:30.261878 kernel: audit: type=1327 audit(1765887570.234:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:30.234000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:30.268096 systemd-logind[1939]: New session 24 of user core. Dec 16 12:19:30.278797 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:19:30.285000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.297125 kernel: audit: type=1105 audit(1765887570.285:902): pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.301000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.309139 kernel: audit: type=1103 audit(1765887570.301:903): pid=5839 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.547792 sshd[5839]: Connection closed by 147.75.109.163 port 39632 Dec 16 12:19:30.548751 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:30.552736 kubelet[3411]: E1216 12:19:30.551890 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:19:30.554412 kubelet[3411]: E1216 12:19:30.552799 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:19:30.554000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.571174 systemd-logind[1939]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:19:30.554000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.579104 kernel: audit: type=1106 audit(1765887570.554:904): pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.579325 kernel: audit: type=1104 audit(1765887570.554:905): pid=5835 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:30.579759 systemd[1]: sshd@22-172.31.20.6:22-147.75.109.163:39632.service: Deactivated successfully. Dec 16 12:19:30.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.20.6:22-147.75.109.163:39632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:30.589576 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:19:30.602210 systemd-logind[1939]: Removed session 24. 
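[Editor's note] The kubelet entries above keep repeating the same "Error syncing pod, skipping" failure because the Calico v3.30.4 tags cannot be resolved on ghcr.io, so the affected pods stay in ImagePullBackOff. A small self-contained Python sketch that pulls the affected pod and image names out of journal text of exactly this shape (the regexes are written against the escaped kubelet format shown above and may need adjustment for other kubelet versions):

    import re
    import sys
    from collections import defaultdict

    # Matches kubelet "Error syncing pod, skipping" entries and the
    # "failed to resolve image: <ref>: not found" fragments inside them,
    # as they appear verbatim in this journal.
    line_re = re.compile(r'"Error syncing pod, skipping" err="(?P<err>.*?)" pod="(?P<pod>[^"]+)"')
    image_re = re.compile(r'failed to resolve image: (\S+?): not found')

    def stuck_pods(journal_text: str) -> dict:
        result = defaultdict(set)
        for m in line_re.finditer(journal_text):
            for image in image_re.findall(m.group("err")):
                result[m.group("pod")].add(image)
        return dict(result)

    if __name__ == "__main__":
        text = sys.stdin.read()          # e.g. piped from journalctl output
        for pod, images in sorted(stuck_pods(text).items()):
            print(pod, "->", ", ".join(sorted(images)))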
Dec 16 12:19:31.550097 kubelet[3411]: E1216 12:19:31.549554 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:19:34.547221 kubelet[3411]: E1216 12:19:34.546992 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:19:35.555211 kubelet[3411]: E1216 12:19:35.554920 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:19:35.596277 systemd[1]: Started sshd@23-172.31.20.6:22-147.75.109.163:45488.service - OpenSSH per-connection server daemon (147.75.109.163:45488). Dec 16 12:19:35.606230 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:35.606380 kernel: audit: type=1130 audit(1765887575.595:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.20.6:22-147.75.109.163:45488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:35.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.20.6:22-147.75.109.163:45488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:35.800000 audit[5856]: USER_ACCT pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.808009 sshd[5856]: Accepted publickey for core from 147.75.109.163 port 45488 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:35.808000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.816776 kernel: audit: type=1101 audit(1765887575.800:908): pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.816898 kernel: audit: type=1103 audit(1765887575.808:909): pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.811543 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:35.822664 kernel: audit: type=1006 audit(1765887575.808:910): pid=5856 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:19:35.808000 audit[5856]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7233d50 a2=3 a3=0 items=0 ppid=1 pid=5856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:35.808000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:35.833220 kernel: audit: type=1300 audit(1765887575.808:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7233d50 a2=3 a3=0 items=0 ppid=1 pid=5856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:35.833365 kernel: audit: type=1327 audit(1765887575.808:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:35.843179 systemd-logind[1939]: New session 25 of user core. Dec 16 12:19:35.846423 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:19:35.854000 audit[5856]: USER_START pid=5856 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.862000 audit[5860]: CRED_ACQ pid=5860 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.869049 kernel: audit: type=1105 audit(1765887575.854:911): pid=5856 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:35.869185 kernel: audit: type=1103 audit(1765887575.862:912): pid=5860 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:36.100566 sshd[5860]: Connection closed by 147.75.109.163 port 45488 Dec 16 12:19:36.102611 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:36.106000 audit[5856]: USER_END pid=5856 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:36.116411 systemd[1]: sshd@23-172.31.20.6:22-147.75.109.163:45488.service: Deactivated successfully. Dec 16 12:19:36.122873 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:19:36.125637 systemd-logind[1939]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:19:36.107000 audit[5856]: CRED_DISP pid=5856 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:36.137268 kernel: audit: type=1106 audit(1765887576.106:913): pid=5856 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:36.137349 kernel: audit: type=1104 audit(1765887576.107:914): pid=5856 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:36.138885 systemd-logind[1939]: Removed session 25. Dec 16 12:19:36.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.20.6:22-147.75.109.163:45488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:36.271245 update_engine[1942]: I20251216 12:19:36.270241 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:19:36.271245 update_engine[1942]: I20251216 12:19:36.270356 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:19:36.272020 update_engine[1942]: I20251216 12:19:36.270928 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 12:19:36.274085 update_engine[1942]: E20251216 12:19:36.273587 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.273754 1942 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.273775 1942 omaha_request_action.cc:617] Omaha request response: Dec 16 12:19:36.274085 update_engine[1942]: E20251216 12:19:36.273897 1942 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.273941 1942 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.273957 1942 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.273972 1942 update_attempter.cc:306] Processing Done. Dec 16 12:19:36.274085 update_engine[1942]: E20251216 12:19:36.273996 1942 update_attempter.cc:619] Update failed. Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.274011 1942 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 12:19:36.274085 update_engine[1942]: I20251216 12:19:36.274024 1942 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 12:19:36.275273 update_engine[1942]: I20251216 12:19:36.274039 1942 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 16 12:19:36.275273 update_engine[1942]: I20251216 12:19:36.274747 1942 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 12:19:36.275273 update_engine[1942]: I20251216 12:19:36.274796 1942 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 12:19:36.275273 update_engine[1942]: I20251216 12:19:36.274812 1942 omaha_request_action.cc:272] Request: Dec 16 12:19:36.275273 update_engine[1942]: Dec 16 12:19:36.275273 update_engine[1942]: Dec 16 12:19:36.275273 update_engine[1942]: Dec 16 12:19:36.275273 update_engine[1942]: Dec 16 12:19:36.275273 update_engine[1942]: Dec 16 12:19:36.275273 update_engine[1942]: Dec 16 12:19:36.275273 update_engine[1942]: I20251216 12:19:36.274847 1942 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 12:19:36.275273 update_engine[1942]: I20251216 12:19:36.274894 1942 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 12:19:36.276838 update_engine[1942]: I20251216 12:19:36.276750 1942 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 12:19:36.277919 locksmithd[2002]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 12:19:36.278610 update_engine[1942]: E20251216 12:19:36.277551 1942 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277673 1942 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277691 1942 omaha_request_action.cc:617] Omaha request response: Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277709 1942 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277722 1942 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277736 1942 update_attempter.cc:306] Processing Done. Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277755 1942 update_attempter.cc:310] Error event sent. Dec 16 12:19:36.278610 update_engine[1942]: I20251216 12:19:36.277773 1942 update_check_scheduler.cc:74] Next update check in 41m40s Dec 16 12:19:36.279270 locksmithd[2002]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 12:19:36.549518 kubelet[3411]: E1216 12:19:36.549437 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:19:41.139882 systemd[1]: Started sshd@24-172.31.20.6:22-147.75.109.163:45500.service - OpenSSH per-connection server daemon (147.75.109.163:45500). Dec 16 12:19:41.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.20.6:22-147.75.109.163:45500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:41.142681 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:41.142780 kernel: audit: type=1130 audit(1765887581.139:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.20.6:22-147.75.109.163:45500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:41.348000 audit[5874]: USER_ACCT pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.356255 sshd[5874]: Accepted publickey for core from 147.75.109.163 port 45500 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:41.361651 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:41.356000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.367695 kernel: audit: type=1101 audit(1765887581.348:917): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.367974 kernel: audit: type=1103 audit(1765887581.356:918): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.375402 kernel: audit: type=1006 audit(1765887581.357:919): pid=5874 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:19:41.357000 audit[5874]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0f1bae0 a2=3 a3=0 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:41.386150 kernel: audit: type=1300 audit(1765887581.357:919): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0f1bae0 a2=3 a3=0 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:41.386304 kernel: audit: type=1327 audit(1765887581.357:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:41.357000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:41.386510 systemd-logind[1939]: New session 26 of user core. Dec 16 12:19:41.401744 systemd[1]: Started session-26.scope - Session 26 of User core. 
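[Editor's note] The update_engine bursts above (12:19:26 and 12:19:36) are not a network fault in the usual sense: the Omaha request is posted to a host literally named "disabled" ("Posting an Omaha request to disabled"), so curl fails at DNS resolution ("Could not resolve host: disabled"). The engine converts that to error code 2000 (kActionCodeOmahaErrorInHTTPResponse), records payload error 37, sends the error event, reports UPDATE_STATUS_REPORTING_ERROR_EVENT and then UPDATE_STATUS_IDLE to locksmithd, and schedules the next check in 41m40s. A server value of "disabled" usually indicates that update checks were deliberately pointed at a placeholder; how that was configured is not visible in this journal, so these recurring entries can be read as expected noise rather than a connectivity problem.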
Dec 16 12:19:41.411000 audit[5874]: USER_START pid=5874 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.422000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.429068 kernel: audit: type=1105 audit(1765887581.411:920): pid=5874 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.429494 kernel: audit: type=1103 audit(1765887581.422:921): pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.634992 sshd[5878]: Connection closed by 147.75.109.163 port 45500 Dec 16 12:19:41.637155 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:41.639000 audit[5874]: USER_END pid=5874 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.639000 audit[5874]: CRED_DISP pid=5874 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.657393 kernel: audit: type=1106 audit(1765887581.639:922): pid=5874 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.657519 kernel: audit: type=1104 audit(1765887581.639:923): pid=5874 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:41.652408 systemd[1]: sshd@24-172.31.20.6:22-147.75.109.163:45500.service: Deactivated successfully. Dec 16 12:19:41.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.20.6:22-147.75.109.163:45500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:41.660706 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:19:41.664444 systemd-logind[1939]: Session 26 logged out. Waiting for processes to exit. 
Dec 16 12:19:41.672914 systemd-logind[1939]: Removed session 26. Dec 16 12:19:45.552041 containerd[1971]: time="2025-12-16T12:19:45.551435928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:19:45.554518 kubelet[3411]: E1216 12:19:45.550019 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:19:45.831926 containerd[1971]: time="2025-12-16T12:19:45.831738577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:45.834352 containerd[1971]: time="2025-12-16T12:19:45.834261565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:19:45.834510 containerd[1971]: time="2025-12-16T12:19:45.834396817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:45.834786 kubelet[3411]: E1216 12:19:45.834720 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:19:45.834968 kubelet[3411]: E1216 12:19:45.834937 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:19:45.835713 kubelet[3411]: E1216 12:19:45.835599 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:45.838249 containerd[1971]: time="2025-12-16T12:19:45.838188049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:19:46.134400 containerd[1971]: time="2025-12-16T12:19:46.134244659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:46.136646 containerd[1971]: time="2025-12-16T12:19:46.136491383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:19:46.137030 kubelet[3411]: E1216 12:19:46.136934 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:19:46.137182 containerd[1971]: time="2025-12-16T12:19:46.136581851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:46.137437 kubelet[3411]: E1216 12:19:46.137292 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 
12:19:46.137708 kubelet[3411]: E1216 12:19:46.137584 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f43187b9eb34459fb9682ed9d785cfc9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:46.138904 containerd[1971]: time="2025-12-16T12:19:46.138836207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:19:46.419732 containerd[1971]: time="2025-12-16T12:19:46.419561100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:46.422188 containerd[1971]: time="2025-12-16T12:19:46.422084052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:19:46.422363 containerd[1971]: time="2025-12-16T12:19:46.422150184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:46.422859 kubelet[3411]: E1216 12:19:46.422707 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:19:46.424240 kubelet[3411]: E1216 12:19:46.424147 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:19:46.424701 kubelet[3411]: E1216 12:19:46.424558 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84lph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7wp4r_calico-system(7f31f51a-3fe7-4796-97ca-d9a3c9b5116f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:46.425246 containerd[1971]: time="2025-12-16T12:19:46.425199516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:19:46.425897 kubelet[3411]: E1216 12:19:46.425791 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:19:46.548156 kubelet[3411]: E1216 12:19:46.548024 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:19:46.677547 systemd[1]: Started sshd@25-172.31.20.6:22-147.75.109.163:59064.service - OpenSSH per-connection server daemon (147.75.109.163:59064). Dec 16 12:19:46.687094 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:46.687194 kernel: audit: type=1130 audit(1765887586.677:925): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.20.6:22-147.75.109.163:59064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:46.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.20.6:22-147.75.109.163:59064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:46.707697 containerd[1971]: time="2025-12-16T12:19:46.707623070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:46.710185 containerd[1971]: time="2025-12-16T12:19:46.709993238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:19:46.710352 containerd[1971]: time="2025-12-16T12:19:46.710134826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:46.711327 kubelet[3411]: E1216 12:19:46.711258 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:19:46.711887 kubelet[3411]: E1216 12:19:46.711331 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:19:46.711887 kubelet[3411]: E1216 12:19:46.711483 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dclhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54cb69c56c-bnxxh_calico-system(41dcedc9-f0d1-4389-a970-074857eabb8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:46.713460 kubelet[3411]: E1216 12:19:46.713186 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:19:46.896000 audit[5896]: USER_ACCT pid=5896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.904078 sshd[5896]: Accepted publickey for core from 147.75.109.163 port 59064 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:46.907435 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:46.904000 audit[5896]: CRED_ACQ pid=5896 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.913894 kernel: audit: type=1101 audit(1765887586.896:926): pid=5896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.914031 kernel: audit: type=1103 audit(1765887586.904:927): pid=5896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.919928 kernel: audit: type=1006 audit(1765887586.904:928): pid=5896 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 12:19:46.904000 audit[5896]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff74496f0 a2=3 a3=0 items=0 ppid=1 pid=5896 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:46.929188 kernel: audit: type=1300 audit(1765887586.904:928): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff74496f0 a2=3 a3=0 items=0 ppid=1 pid=5896 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:46.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:46.935380 kernel: audit: type=1327 audit(1765887586.904:928): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:46.936526 systemd-logind[1939]: New session 27 of user core. Dec 16 12:19:46.944664 systemd[1]: Started session-27.scope - Session 27 of User core. 
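[Editor's note] The containerd entries above show the underlying failure: the registry answers 404 Not Found for the v3.30.4 manifests, which containerd surfaces as the NotFound gRPC code that kubelet then wraps in ErrImagePull/ImagePullBackOff. Below is a minimal stdlib-Python sketch of the same probe against the OCI Distribution API. The anonymous token endpoint and service name are assumptions about ghcr.io (the standard flow is advertised in the registry's WWW-Authenticate header), so treat this as illustrative rather than a reproduction of containerd's resolver:

    # check_tag.py -- minimal sketch; token endpoint details are assumptions
    import json
    import sys
    import urllib.error
    import urllib.request

    REGISTRY = "ghcr.io"

    def tag_exists(repository: str, tag: str) -> bool:
        # Anonymous pull token, standard Docker-registry token flow (assumed for ghcr.io).
        token_url = (f"https://{REGISTRY}/token"
                     f"?scope=repository:{repository}:pull&service={REGISTRY}")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # HEAD the manifest for the tag; 404 is the "not found" containerd reports above.
        req = urllib.request.Request(
            f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        repo, tag = sys.argv[1], sys.argv[2]   # e.g. flatcar/calico/csi v3.30.4
        print(tag_exists(repo, tag))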
Dec 16 12:19:46.952000 audit[5896]: USER_START pid=5896 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.962000 audit[5900]: CRED_ACQ pid=5900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.969897 kernel: audit: type=1105 audit(1765887586.952:929): pid=5896 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:46.970030 kernel: audit: type=1103 audit(1765887586.962:930): pid=5900 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:47.201645 sshd[5900]: Connection closed by 147.75.109.163 port 59064 Dec 16 12:19:47.202580 sshd-session[5896]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:47.204000 audit[5896]: USER_END pid=5896 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:47.218042 systemd[1]: sshd@25-172.31.20.6:22-147.75.109.163:59064.service: Deactivated successfully. Dec 16 12:19:47.211000 audit[5896]: CRED_DISP pid=5896 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:47.230379 kernel: audit: type=1106 audit(1765887587.204:931): pid=5896 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:47.230487 kernel: audit: type=1104 audit(1765887587.211:932): pid=5896 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:47.232043 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:19:47.236647 systemd-logind[1939]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:19:47.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.20.6:22-147.75.109.163:59064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:47.242114 systemd-logind[1939]: Removed session 27. Dec 16 12:19:47.547841 kubelet[3411]: E1216 12:19:47.547179 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:19:49.554086 kubelet[3411]: E1216 12:19:49.552542 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:19:52.249436 systemd[1]: Started sshd@26-172.31.20.6:22-147.75.109.163:54964.service - OpenSSH per-connection server daemon (147.75.109.163:54964). Dec 16 12:19:52.251276 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:52.251510 kernel: audit: type=1130 audit(1765887592.248:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.20.6:22-147.75.109.163:54964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:52.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.20.6:22-147.75.109.163:54964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:52.471000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.472768 sshd[5940]: Accepted publickey for core from 147.75.109.163 port 54964 ssh2: RSA SHA256:akQMr2ov2nuM0LGikPoztxlYyDrZ4eOQOASr+dHXo3U Dec 16 12:19:52.481160 kernel: audit: type=1101 audit(1765887592.471:935): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.481000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.490034 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:52.495876 kernel: audit: type=1103 audit(1765887592.481:936): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.496014 kernel: audit: type=1006 audit(1765887592.481:937): pid=5940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 12:19:52.481000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2e9ba40 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:52.502801 kernel: audit: type=1300 audit(1765887592.481:937): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2e9ba40 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:52.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:52.506010 kernel: audit: type=1327 audit(1765887592.481:937): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:52.505365 systemd-logind[1939]: New session 28 of user core. Dec 16 12:19:52.517343 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 16 12:19:52.524000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.533000 audit[5944]: CRED_ACQ pid=5944 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.541158 kernel: audit: type=1105 audit(1765887592.524:938): pid=5940 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.541354 kernel: audit: type=1103 audit(1765887592.533:939): pid=5944 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.758383 sshd[5944]: Connection closed by 147.75.109.163 port 54964 Dec 16 12:19:52.757698 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:52.760000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.761000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.777123 kernel: audit: type=1106 audit(1765887592.760:940): pid=5940 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.777275 kernel: audit: type=1104 audit(1765887592.761:941): pid=5940 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:19:52.778377 systemd[1]: sshd@26-172.31.20.6:22-147.75.109.163:54964.service: Deactivated successfully. Dec 16 12:19:52.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.20.6:22-147.75.109.163:54964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:52.783228 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 12:19:52.786788 systemd-logind[1939]: Session 28 logged out. Waiting for processes to exit. 
Dec 16 12:19:52.794905 systemd-logind[1939]: Removed session 28. Dec 16 12:19:56.550685 containerd[1971]: time="2025-12-16T12:19:56.550614586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:19:56.805782 containerd[1971]: time="2025-12-16T12:19:56.805334400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:56.808026 containerd[1971]: time="2025-12-16T12:19:56.807815664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:19:56.808026 containerd[1971]: time="2025-12-16T12:19:56.807955260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:56.809131 kubelet[3411]: E1216 12:19:56.808445 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:19:56.809131 kubelet[3411]: E1216 12:19:56.808513 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:19:56.809131 kubelet[3411]: E1216 12:19:56.808722 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z56p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5959d55c94-c8546_calico-system(9a9f64cf-c939-425f-bc9b-14da143ab498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:56.810406 kubelet[3411]: E1216 12:19:56.810354 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:19:59.559018 containerd[1971]: time="2025-12-16T12:19:59.558125017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:19:59.561448 kubelet[3411]: E1216 12:19:59.561128 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:19:59.867148 containerd[1971]: time="2025-12-16T12:19:59.866470851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:19:59.868859 containerd[1971]: time="2025-12-16T12:19:59.868692027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:19:59.868859 containerd[1971]: time="2025-12-16T12:19:59.868746603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:19:59.869283 kubelet[3411]: E1216 12:19:59.869205 
3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:19:59.869572 kubelet[3411]: E1216 12:19:59.869277 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:19:59.869572 kubelet[3411]: E1216 12:19:59.869480 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ntjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-6k77x_calico-apiserver(41fe1dec-6478-42fa-9c60-8b697b125498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:19:59.870827 kubelet[3411]: E1216 12:19:59.870741 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:20:00.550220 kubelet[3411]: E1216 12:20:00.550093 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:20:00.550956 containerd[1971]: time="2025-12-16T12:20:00.550857974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:20:00.826322 containerd[1971]: time="2025-12-16T12:20:00.825532012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:20:00.827962 containerd[1971]: time="2025-12-16T12:20:00.827820004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:20:00.827962 containerd[1971]: time="2025-12-16T12:20:00.827882464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:20:00.829101 kubelet[3411]: E1216 12:20:00.828330 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:20:00.829101 kubelet[3411]: E1216 12:20:00.828395 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:20:00.829101 kubelet[3411]: E1216 12:20:00.828573 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88br8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fcb9bdb55-c27r4_calico-apiserver(a0556f5e-184b-4527-b60e-270da372abfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:20:00.832187 kubelet[3411]: E1216 12:20:00.830284 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:20:03.554883 containerd[1971]: time="2025-12-16T12:20:03.554813501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:20:03.879036 containerd[1971]: time="2025-12-16T12:20:03.878739487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:20:03.881574 containerd[1971]: time="2025-12-16T12:20:03.881393647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:20:03.881749 containerd[1971]: time="2025-12-16T12:20:03.881453887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:20:03.882222 kubelet[3411]: E1216 
12:20:03.882049 3411 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:20:03.882856 kubelet[3411]: E1216 12:20:03.882257 3411 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:20:03.882856 kubelet[3411]: E1216 12:20:03.882529 3411 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h456d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-f4x44_calico-system(9e39aa72-dd6b-4253-877f-1d57a9236239): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:20:03.883919 kubelet[3411]: E1216 12:20:03.883843 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:20:09.549913 kubelet[3411]: E1216 12:20:09.549244 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:20:10.553282 kubelet[3411]: E1216 12:20:10.553211 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:20:12.549511 kubelet[3411]: E1216 12:20:12.549373 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:20:12.553250 kubelet[3411]: E1216 12:20:12.553148 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:20:14.547954 kubelet[3411]: E1216 12:20:14.547898 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:20:15.547345 kubelet[3411]: E1216 12:20:15.547247 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:20:22.550560 kubelet[3411]: E1216 12:20:22.550458 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:20:23.552085 kubelet[3411]: E1216 12:20:23.551258 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:20:24.548994 kubelet[3411]: E1216 12:20:24.548177 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 
12:20:26.548726 kubelet[3411]: E1216 12:20:26.548622 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:20:28.548350 kubelet[3411]: E1216 12:20:28.548273 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:20:28.548994 kubelet[3411]: E1216 12:20:28.548900 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:20:34.547598 kubelet[3411]: E1216 12:20:34.547394 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:20:37.547779 kubelet[3411]: E1216 12:20:37.547636 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:20:39.547958 kubelet[3411]: E1216 12:20:39.547782 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:20:39.549306 kubelet[3411]: E1216 12:20:39.549235 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:20:39.852883 systemd[1]: cri-containerd-b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f.scope: Deactivated successfully. Dec 16 12:20:39.853572 systemd[1]: cri-containerd-b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f.scope: Consumed 30.506s CPU time, 107.2M memory peak. Dec 16 12:20:39.860520 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:20:39.860666 kernel: audit: type=1334 audit(1765887639.856:943): prog-id=153 op=UNLOAD Dec 16 12:20:39.856000 audit: BPF prog-id=153 op=UNLOAD Dec 16 12:20:39.861872 containerd[1971]: time="2025-12-16T12:20:39.861782514Z" level=info msg="received container exit event container_id:\"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\" id:\"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\" pid:3813 exit_status:1 exited_at:{seconds:1765887639 nanos:860761218}" Dec 16 12:20:39.865239 kernel: audit: type=1334 audit(1765887639.856:944): prog-id=157 op=UNLOAD Dec 16 12:20:39.856000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:20:39.908777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f-rootfs.mount: Deactivated successfully. Dec 16 12:20:40.450486 systemd[1]: cri-containerd-e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0.scope: Deactivated successfully. Dec 16 12:20:40.452221 systemd[1]: cri-containerd-e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0.scope: Consumed 6.057s CPU time, 57.9M memory peak, 248K read from disk. 
Dec 16 12:20:40.453000 audit: BPF prog-id=263 op=LOAD Dec 16 12:20:40.453000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:20:40.459569 kernel: audit: type=1334 audit(1765887640.453:945): prog-id=263 op=LOAD Dec 16 12:20:40.459661 kernel: audit: type=1334 audit(1765887640.453:946): prog-id=95 op=UNLOAD Dec 16 12:20:40.459709 kernel: audit: type=1334 audit(1765887640.454:947): prog-id=110 op=UNLOAD Dec 16 12:20:40.454000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:20:40.463762 kernel: audit: type=1334 audit(1765887640.454:948): prog-id=114 op=UNLOAD Dec 16 12:20:40.454000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:20:40.463985 containerd[1971]: time="2025-12-16T12:20:40.462220625Z" level=info msg="received container exit event container_id:\"e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0\" id:\"e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0\" pid:3146 exit_status:1 exited_at:{seconds:1765887640 nanos:457838944}" Dec 16 12:20:40.504674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0-rootfs.mount: Deactivated successfully. Dec 16 12:20:40.571841 kubelet[3411]: I1216 12:20:40.571766 3411 scope.go:117] "RemoveContainer" containerID="e8f64c3e95425875968841a31bda77e8ddaaf3cfa6d58e82005d2056847e8db0" Dec 16 12:20:40.574836 kubelet[3411]: I1216 12:20:40.574788 3411 scope.go:117] "RemoveContainer" containerID="b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f" Dec 16 12:20:40.577845 containerd[1971]: time="2025-12-16T12:20:40.577779989Z" level=info msg="CreateContainer within sandbox \"10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 12:20:40.580404 containerd[1971]: time="2025-12-16T12:20:40.580305341Z" level=info msg="CreateContainer within sandbox \"caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:20:40.607082 containerd[1971]: time="2025-12-16T12:20:40.606979757Z" level=info msg="Container 093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:20:40.611716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826860220.mount: Deactivated successfully. 
Dec 16 12:20:40.616911 containerd[1971]: time="2025-12-16T12:20:40.616447205Z" level=info msg="Container 590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:20:40.636881 containerd[1971]: time="2025-12-16T12:20:40.636776909Z" level=info msg="CreateContainer within sandbox \"caecc70f2907c364a3748809972cf3693dca789785eb9dd95eeeaa3c5ab89eee\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a\"" Dec 16 12:20:40.638490 containerd[1971]: time="2025-12-16T12:20:40.638411381Z" level=info msg="StartContainer for \"093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a\"" Dec 16 12:20:40.645049 containerd[1971]: time="2025-12-16T12:20:40.644948297Z" level=info msg="connecting to shim 093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a" address="unix:///run/containerd/s/8822b850440a04f998592980a40b5be9139a4d107f089a0abb02aa56a783d4a6" protocol=ttrpc version=3 Dec 16 12:20:40.650356 containerd[1971]: time="2025-12-16T12:20:40.650287085Z" level=info msg="CreateContainer within sandbox \"10501a4b98e3306760e7659fcb35b25bffeb293eb3b06519aa824d5b82b07aa7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a\"" Dec 16 12:20:40.652149 containerd[1971]: time="2025-12-16T12:20:40.652045037Z" level=info msg="StartContainer for \"590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a\"" Dec 16 12:20:40.658417 containerd[1971]: time="2025-12-16T12:20:40.658309913Z" level=info msg="connecting to shim 590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a" address="unix:///run/containerd/s/54cae46c8b306d43329af42cd45bbab01f0ff2cf83421204d687166d9ee956d6" protocol=ttrpc version=3 Dec 16 12:20:40.692433 systemd[1]: Started cri-containerd-093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a.scope - libcontainer container 093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a. Dec 16 12:20:40.712683 systemd[1]: Started cri-containerd-590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a.scope - libcontainer container 590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a. 
Dec 16 12:20:40.737000 audit: BPF prog-id=264 op=LOAD Dec 16 12:20:40.741219 kernel: audit: type=1334 audit(1765887640.737:949): prog-id=264 op=LOAD Dec 16 12:20:40.740000 audit: BPF prog-id=265 op=LOAD Dec 16 12:20:40.740000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.750286 kernel: audit: type=1334 audit(1765887640.740:950): prog-id=265 op=LOAD Dec 16 12:20:40.750434 kernel: audit: type=1300 audit(1765887640.740:950): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.757315 kernel: audit: type=1327 audit(1765887640.740:950): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.742000 audit: BPF prog-id=265 op=UNLOAD Dec 16 12:20:40.742000 audit[6035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.742000 audit: BPF prog-id=266 op=LOAD Dec 16 12:20:40.742000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.742000 audit: BPF prog-id=267 op=LOAD Dec 16 12:20:40.742000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.742000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.742000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:20:40.742000 audit[6035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.742000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:20:40.742000 audit[6035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.743000 audit: BPF prog-id=268 op=LOAD Dec 16 12:20:40.743000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3595 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333837303430326331353564303963636366623166653462323066 Dec 16 12:20:40.757000 audit: BPF prog-id=269 op=LOAD Dec 16 12:20:40.758000 audit: BPF prog-id=270 op=LOAD Dec 16 12:20:40.758000 audit[6042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.759000 audit: BPF prog-id=270 op=UNLOAD Dec 16 12:20:40.759000 audit[6042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.759000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.760000 audit: BPF prog-id=271 op=LOAD Dec 16 12:20:40.760000 audit[6042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.760000 audit: BPF prog-id=272 op=LOAD Dec 16 12:20:40.760000 audit[6042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.761000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:20:40.761000 audit[6042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.762000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:20:40.762000 audit[6042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.762000 audit: BPF prog-id=273 op=LOAD Dec 16 12:20:40.762000 audit[6042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2997 pid=6042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:40.762000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539303233306231326665383566666262303734376639373737313334 Dec 16 12:20:40.815362 containerd[1971]: time="2025-12-16T12:20:40.814783002Z" level=info msg="StartContainer for \"093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a\" returns successfully" Dec 16 12:20:40.853472 containerd[1971]: time="2025-12-16T12:20:40.853401738Z" level=info msg="StartContainer for \"590230b12fe85ffbb0747f97771344e236c49fd389aa9f7b2879a6df6dd30b6a\" returns successfully" Dec 16 12:20:40.921839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482869863.mount: Deactivated successfully. Dec 16 12:20:41.550461 kubelet[3411]: E1216 12:20:41.550373 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:20:42.547007 kubelet[3411]: E1216 12:20:42.546880 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:20:43.777901 systemd[1]: cri-containerd-d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a.scope: Deactivated successfully. Dec 16 12:20:43.778793 systemd[1]: cri-containerd-d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a.scope: Consumed 5.634s CPU time, 22.3M memory peak, 64K read from disk. Dec 16 12:20:43.780000 audit: BPF prog-id=274 op=LOAD Dec 16 12:20:43.780000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:20:43.782000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:20:43.782000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:20:43.784903 containerd[1971]: time="2025-12-16T12:20:43.784683285Z" level=info msg="received container exit event container_id:\"d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a\" id:\"d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a\" pid:3156 exit_status:1 exited_at:{seconds:1765887643 nanos:783820197}" Dec 16 12:20:43.831736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a-rootfs.mount: Deactivated successfully. 
Dec 16 12:20:44.602186 kubelet[3411]: I1216 12:20:44.601808 3411 scope.go:117] "RemoveContainer" containerID="d923840943e98981b5e857f240c392215f59ed3248a0d6d81f5d5b3c6cc3da0a" Dec 16 12:20:44.606123 containerd[1971]: time="2025-12-16T12:20:44.605516373Z" level=info msg="CreateContainer within sandbox \"67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 12:20:44.624558 containerd[1971]: time="2025-12-16T12:20:44.624462801Z" level=info msg="Container 200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:20:44.647353 containerd[1971]: time="2025-12-16T12:20:44.647302161Z" level=info msg="CreateContainer within sandbox \"67dbfed748dc2b7843943459ec777e7789f02e9c990df6eebbc8e33edf94b39a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393\"" Dec 16 12:20:44.648374 containerd[1971]: time="2025-12-16T12:20:44.648313089Z" level=info msg="StartContainer for \"200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393\"" Dec 16 12:20:44.651111 containerd[1971]: time="2025-12-16T12:20:44.650944977Z" level=info msg="connecting to shim 200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393" address="unix:///run/containerd/s/4a05be2f28b5aba37ec9f2b295b4fa1940412bd2f5846a709a18fc40935cca78" protocol=ttrpc version=3 Dec 16 12:20:44.693437 systemd[1]: Started cri-containerd-200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393.scope - libcontainer container 200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393. Dec 16 12:20:44.717000 audit: BPF prog-id=275 op=LOAD Dec 16 12:20:44.718000 audit: BPF prog-id=276 op=LOAD Dec 16 12:20:44.718000 audit[6108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.718000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:20:44.718000 audit[6108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.719000 audit: BPF prog-id=277 op=LOAD Dec 16 12:20:44.719000 audit[6108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.719000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.719000 audit: BPF prog-id=278 op=LOAD Dec 16 12:20:44.719000 audit[6108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.719000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:20:44.719000 audit[6108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.719000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:20:44.719000 audit[6108]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.719000 audit: BPF prog-id=279 op=LOAD Dec 16 12:20:44.719000 audit[6108]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3023 pid=6108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:20:44.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230306330646162313932653565356363343331373761623331666239 Dec 16 12:20:44.776847 containerd[1971]: time="2025-12-16T12:20:44.776778862Z" level=info msg="StartContainer for \"200c0dab192e5e5cc43177ab31fb92663cc95c21560dda9f2b32852f608e8393\" returns successfully" Dec 16 12:20:44.881523 kubelet[3411]: E1216 12:20:44.881263 3411 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-6?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:20:47.547588 kubelet[3411]: E1216 
12:20:47.547502 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498" Dec 16 12:20:52.319268 systemd[1]: cri-containerd-093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a.scope: Deactivated successfully. Dec 16 12:20:52.322603 containerd[1971]: time="2025-12-16T12:20:52.322531647Z" level=info msg="received container exit event container_id:\"093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a\" id:\"093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a\" pid:6059 exit_status:1 exited_at:{seconds:1765887652 nanos:321970047}" Dec 16 12:20:52.322000 audit: BPF prog-id=264 op=UNLOAD Dec 16 12:20:52.325628 kernel: kauditd_printk_skb: 66 callbacks suppressed Dec 16 12:20:52.325705 kernel: audit: type=1334 audit(1765887652.322:977): prog-id=264 op=UNLOAD Dec 16 12:20:52.323000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:20:52.329674 kernel: audit: type=1334 audit(1765887652.323:978): prog-id=268 op=UNLOAD Dec 16 12:20:52.365906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a-rootfs.mount: Deactivated successfully. Dec 16 12:20:52.547845 kubelet[3411]: E1216 12:20:52.547772 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7wp4r" podUID="7f31f51a-3fe7-4796-97ca-d9a3c9b5116f" Dec 16 12:20:52.635221 kubelet[3411]: I1216 12:20:52.634911 3411 scope.go:117] "RemoveContainer" containerID="b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f" Dec 16 12:20:52.636269 kubelet[3411]: I1216 12:20:52.636226 3411 scope.go:117] "RemoveContainer" containerID="093870402c155d09cccfb1fe4b20f1af82201546e0fed3135f6ff1158b057f9a" Dec 16 12:20:52.637259 kubelet[3411]: E1216 12:20:52.636975 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-sjp2r_tigera-operator(f6268e6c-0326-4253-ad6f-b6289565f354)\"" pod="tigera-operator/tigera-operator-7dcd859c48-sjp2r" podUID="f6268e6c-0326-4253-ad6f-b6289565f354" Dec 16 12:20:52.641133 containerd[1971]: time="2025-12-16T12:20:52.640546037Z" level=info msg="RemoveContainer for 
\"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\"" Dec 16 12:20:52.651438 containerd[1971]: time="2025-12-16T12:20:52.651325001Z" level=info msg="RemoveContainer for \"b74d125ff7b95df5bf4fa34ef892d25e892af41f4bdff6fbda5fd85d21338f5f\" returns successfully" Dec 16 12:20:53.547761 kubelet[3411]: E1216 12:20:53.547663 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f4x44" podUID="9e39aa72-dd6b-4253-877f-1d57a9236239" Dec 16 12:20:54.547564 kubelet[3411]: E1216 12:20:54.547269 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-6k77x" podUID="41fe1dec-6478-42fa-9c60-8b697b125498" Dec 16 12:20:54.548537 kubelet[3411]: E1216 12:20:54.548249 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fcb9bdb55-c27r4" podUID="a0556f5e-184b-4527-b60e-270da372abfb" Dec 16 12:20:54.882731 kubelet[3411]: E1216 12:20:54.882194 3411 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:20:56.548277 kubelet[3411]: E1216 12:20:56.548130 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54cb69c56c-bnxxh" podUID="41dcedc9-f0d1-4389-a970-074857eabb8a" Dec 16 12:20:58.546787 kubelet[3411]: E1216 12:20:58.546723 3411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5959d55c94-c8546" podUID="9a9f64cf-c939-425f-bc9b-14da143ab498"