Dec 16 02:07:53.175856 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 16 02:07:53.175904 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 02:07:53.175930 kernel: KASLR disabled due to lack of seed Dec 16 02:07:53.175946 kernel: efi: EFI v2.7 by EDK II Dec 16 02:07:53.175992 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Dec 16 02:07:53.176011 kernel: secureboot: Secure boot disabled Dec 16 02:07:53.176029 kernel: ACPI: Early table checksum verification disabled Dec 16 02:07:53.176045 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 16 02:07:53.176061 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 16 02:07:53.176082 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 16 02:07:53.176099 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 16 02:07:53.176114 kernel: ACPI: FACS 0x0000000078630000 000040 Dec 16 02:07:53.176130 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 16 02:07:53.176146 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 16 02:07:53.176169 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 16 02:07:53.176186 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 16 02:07:53.176203 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 16 02:07:53.176219 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 16 02:07:53.176236 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 16 02:07:53.176253 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 16 02:07:53.176270 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 16 02:07:53.176287 kernel: printk: legacy bootconsole [uart0] enabled Dec 16 02:07:53.176303 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 02:07:53.176321 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 16 02:07:53.176342 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Dec 16 02:07:53.176359 kernel: Zone ranges: Dec 16 02:07:53.176376 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 16 02:07:53.176393 kernel: DMA32 empty Dec 16 02:07:53.176409 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 16 02:07:53.176426 kernel: Device empty Dec 16 02:07:53.176443 kernel: Movable zone start for each node Dec 16 02:07:53.176460 kernel: Early memory node ranges Dec 16 02:07:53.176476 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 16 02:07:53.176493 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 16 02:07:53.176510 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 16 02:07:53.176526 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 16 02:07:53.176547 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 16 02:07:53.176580 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 16 02:07:53.176599 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 16 02:07:53.176616 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 16 02:07:53.176642 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 16 02:07:53.176664 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 16 02:07:53.176682 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Dec 16 02:07:53.176699 kernel: psci: probing for conduit method from ACPI. Dec 16 02:07:53.176717 kernel: psci: PSCIv1.0 detected in firmware. Dec 16 02:07:53.176735 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 02:07:53.176752 kernel: psci: Trusted OS migration not required Dec 16 02:07:53.176770 kernel: psci: SMC Calling Convention v1.1 Dec 16 02:07:53.176788 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Dec 16 02:07:53.176805 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 02:07:53.176827 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 02:07:53.176845 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 02:07:53.176862 kernel: Detected PIPT I-cache on CPU0 Dec 16 02:07:53.176880 kernel: CPU features: detected: GIC system register CPU interface Dec 16 02:07:53.176897 kernel: CPU features: detected: Spectre-v2 Dec 16 02:07:53.176915 kernel: CPU features: detected: Spectre-v3a Dec 16 02:07:53.176932 kernel: CPU features: detected: Spectre-BHB Dec 16 02:07:53.176950 kernel: CPU features: detected: ARM erratum 1742098 Dec 16 02:07:53.177001 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 16 02:07:53.177020 kernel: alternatives: applying boot alternatives Dec 16 02:07:53.177040 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:07:53.177064 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 02:07:53.177082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 02:07:53.177099 kernel: Fallback order for Node 0: 0 Dec 16 02:07:53.177117 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Dec 16 02:07:53.177134 kernel: Policy zone: Normal Dec 16 02:07:53.177151 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 02:07:53.177169 kernel: software IO TLB: area num 2. Dec 16 02:07:53.177186 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Dec 16 02:07:53.177204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 02:07:53.177221 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 02:07:53.177244 kernel: rcu: RCU event tracing is enabled. Dec 16 02:07:53.177262 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 02:07:53.177280 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 02:07:53.177298 kernel: Tracing variant of Tasks RCU enabled. Dec 16 02:07:53.177316 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 02:07:53.177334 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 02:07:53.177351 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 02:07:53.177369 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 02:07:53.177387 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 02:07:53.177404 kernel: GICv3: 96 SPIs implemented Dec 16 02:07:53.177422 kernel: GICv3: 0 Extended SPIs implemented Dec 16 02:07:53.177444 kernel: Root IRQ handler: gic_handle_irq Dec 16 02:07:53.177462 kernel: GICv3: GICv3 features: 16 PPIs Dec 16 02:07:53.177479 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 02:07:53.177497 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 16 02:07:53.177514 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 16 02:07:53.177532 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Dec 16 02:07:53.177551 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Dec 16 02:07:53.177568 kernel: GICv3: using LPI property table @0x0000000400110000 Dec 16 02:07:53.177586 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 16 02:07:53.177603 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Dec 16 02:07:53.177620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 02:07:53.177664 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 16 02:07:53.177683 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 16 02:07:53.177700 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 16 02:07:53.177718 kernel: Console: colour dummy device 80x25 Dec 16 02:07:53.177737 kernel: printk: legacy console [tty1] enabled Dec 16 02:07:53.177755 kernel: ACPI: Core revision 20240827 Dec 16 02:07:53.177774 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 16 02:07:53.177792 kernel: pid_max: default: 32768 minimum: 301 Dec 16 02:07:53.177816 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 02:07:53.177834 kernel: landlock: Up and running. Dec 16 02:07:53.177852 kernel: SELinux: Initializing. Dec 16 02:07:53.177870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:07:53.177888 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:07:53.177906 kernel: rcu: Hierarchical SRCU implementation. Dec 16 02:07:53.177924 kernel: rcu: Max phase no-delay instances is 400. Dec 16 02:07:53.177942 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 02:07:53.177986 kernel: Remapping and enabling EFI services. Dec 16 02:07:53.178007 kernel: smp: Bringing up secondary CPUs ... Dec 16 02:07:53.178025 kernel: Detected PIPT I-cache on CPU1 Dec 16 02:07:53.178044 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 16 02:07:53.178062 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Dec 16 02:07:53.178081 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 16 02:07:53.178099 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 02:07:53.178123 kernel: SMP: Total of 2 processors activated. 
Dec 16 02:07:53.178141 kernel: CPU: All CPU(s) started at EL1 Dec 16 02:07:53.178171 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 02:07:53.178194 kernel: CPU features: detected: 32-bit EL1 Support Dec 16 02:07:53.178213 kernel: CPU features: detected: CRC32 instructions Dec 16 02:07:53.178231 kernel: alternatives: applying system-wide alternatives Dec 16 02:07:53.178252 kernel: Memory: 3823400K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 185716K reserved, 16384K cma-reserved) Dec 16 02:07:53.178272 kernel: devtmpfs: initialized Dec 16 02:07:53.178296 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 02:07:53.178315 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 02:07:53.178334 kernel: 23648 pages in range for non-PLT usage Dec 16 02:07:53.178353 kernel: 515168 pages in range for PLT usage Dec 16 02:07:53.178371 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 02:07:53.178394 kernel: SMBIOS 3.0.0 present. Dec 16 02:07:53.178413 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 16 02:07:53.178432 kernel: DMI: Memory slots populated: 0/0 Dec 16 02:07:53.178451 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 02:07:53.178470 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 02:07:53.178490 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 02:07:53.178509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 02:07:53.178532 kernel: audit: initializing netlink subsys (disabled) Dec 16 02:07:53.178551 kernel: audit: type=2000 audit(0.232:1): state=initialized audit_enabled=0 res=1 Dec 16 02:07:53.178570 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 02:07:53.178590 kernel: cpuidle: using governor menu Dec 16 02:07:53.178610 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 02:07:53.178629 kernel: ASID allocator initialised with 65536 entries Dec 16 02:07:53.178648 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 02:07:53.178671 kernel: Serial: AMBA PL011 UART driver Dec 16 02:07:53.178691 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 02:07:53.178711 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 02:07:53.178730 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 02:07:53.178749 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 02:07:53.178768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 02:07:53.178787 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 02:07:53.178811 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 02:07:53.178830 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 02:07:53.178849 kernel: ACPI: Added _OSI(Module Device) Dec 16 02:07:53.178868 kernel: ACPI: Added _OSI(Processor Device) Dec 16 02:07:53.178887 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 02:07:53.178905 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 02:07:53.178924 kernel: ACPI: Interpreter enabled Dec 16 02:07:53.178947 kernel: ACPI: Using GIC for interrupt routing Dec 16 02:07:53.178994 kernel: ACPI: MCFG table detected, 1 entries Dec 16 02:07:53.179016 kernel: ACPI: CPU0 has been hot-added Dec 16 02:07:53.179035 kernel: ACPI: CPU1 has been hot-added Dec 16 02:07:53.179054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Dec 16 02:07:53.179422 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 02:07:53.179682 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 02:07:53.179943 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 02:07:53.180243 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Dec 16 02:07:53.180503 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Dec 16 02:07:53.180530 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 16 02:07:53.180549 kernel: acpiphp: Slot [1] registered Dec 16 02:07:53.180568 kernel: acpiphp: Slot [2] registered Dec 16 02:07:53.180594 kernel: acpiphp: Slot [3] registered Dec 16 02:07:53.180613 kernel: acpiphp: Slot [4] registered Dec 16 02:07:53.180631 kernel: acpiphp: Slot [5] registered Dec 16 02:07:53.180650 kernel: acpiphp: Slot [6] registered Dec 16 02:07:53.180669 kernel: acpiphp: Slot [7] registered Dec 16 02:07:53.180687 kernel: acpiphp: Slot [8] registered Dec 16 02:07:53.180706 kernel: acpiphp: Slot [9] registered Dec 16 02:07:53.180724 kernel: acpiphp: Slot [10] registered Dec 16 02:07:53.180749 kernel: acpiphp: Slot [11] registered Dec 16 02:07:53.180767 kernel: acpiphp: Slot [12] registered Dec 16 02:07:53.180786 kernel: acpiphp: Slot [13] registered Dec 16 02:07:53.180805 kernel: acpiphp: Slot [14] registered Dec 16 02:07:53.180823 kernel: acpiphp: Slot [15] registered Dec 16 02:07:53.180842 kernel: acpiphp: Slot [16] registered Dec 16 02:07:53.180861 kernel: acpiphp: Slot [17] registered Dec 16 02:07:53.180885 kernel: acpiphp: Slot [18] registered Dec 16 02:07:53.180904 kernel: acpiphp: Slot [19] registered Dec 16 02:07:53.180922 kernel: acpiphp: Slot [20] registered Dec 16 02:07:53.180941 kernel: acpiphp: Slot [21] registered Dec 16 02:07:53.180986 
kernel: acpiphp: Slot [22] registered Dec 16 02:07:53.181008 kernel: acpiphp: Slot [23] registered Dec 16 02:07:53.181028 kernel: acpiphp: Slot [24] registered Dec 16 02:07:53.181053 kernel: acpiphp: Slot [25] registered Dec 16 02:07:53.181072 kernel: acpiphp: Slot [26] registered Dec 16 02:07:53.181091 kernel: acpiphp: Slot [27] registered Dec 16 02:07:53.181110 kernel: acpiphp: Slot [28] registered Dec 16 02:07:53.181128 kernel: acpiphp: Slot [29] registered Dec 16 02:07:53.181146 kernel: acpiphp: Slot [30] registered Dec 16 02:07:53.181165 kernel: acpiphp: Slot [31] registered Dec 16 02:07:53.181184 kernel: PCI host bridge to bus 0000:00 Dec 16 02:07:53.181471 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 16 02:07:53.181727 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 02:07:53.181978 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 16 02:07:53.182223 kernel: pci_bus 0000:00: root bus resource [bus 00] Dec 16 02:07:53.182522 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Dec 16 02:07:53.182820 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Dec 16 02:07:53.183521 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Dec 16 02:07:53.183815 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Dec 16 02:07:53.184797 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Dec 16 02:07:53.185120 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 02:07:53.185414 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Dec 16 02:07:53.185696 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Dec 16 02:07:53.185979 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Dec 16 02:07:53.186249 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Dec 16 02:07:53.186505 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 02:07:53.186739 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 16 02:07:53.187030 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 02:07:53.187283 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 16 02:07:53.187309 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 02:07:53.187329 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 02:07:53.187349 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 02:07:53.187368 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 02:07:53.187387 kernel: iommu: Default domain type: Translated Dec 16 02:07:53.187413 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 02:07:53.187432 kernel: efivars: Registered efivars operations Dec 16 02:07:53.187451 kernel: vgaarb: loaded Dec 16 02:07:53.187470 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 02:07:53.187488 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 02:07:53.187507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 02:07:53.187526 kernel: pnp: PnP ACPI init Dec 16 02:07:53.187799 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 16 02:07:53.187826 kernel: pnp: PnP ACPI: found 1 devices Dec 16 02:07:53.187846 kernel: NET: Registered PF_INET protocol family Dec 16 02:07:53.187865 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 02:07:53.187885 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 02:07:53.187904 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 02:07:53.187923 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 02:07:53.187948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 02:07:53.188000 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 02:07:53.188021 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:07:53.188041 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:07:53.188060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 02:07:53.188080 kernel: PCI: CLS 0 bytes, default 64 Dec 16 02:07:53.188099 kernel: kvm [1]: HYP mode not available Dec 16 02:07:53.188125 kernel: Initialise system trusted keyrings Dec 16 02:07:53.188143 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 02:07:53.188162 kernel: Key type asymmetric registered Dec 16 02:07:53.188182 kernel: Asymmetric key parser 'x509' registered Dec 16 02:07:53.188201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 02:07:53.188220 kernel: io scheduler mq-deadline registered Dec 16 02:07:53.188239 kernel: io scheduler kyber registered Dec 16 02:07:53.188263 kernel: io scheduler bfq registered Dec 16 02:07:53.188552 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 16 02:07:53.188580 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 02:07:53.188600 kernel: ACPI: button: Power Button [PWRB] Dec 16 02:07:53.188620 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 16 02:07:53.188639 kernel: ACPI: button: Sleep Button [SLPB] Dec 16 02:07:53.188664 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 02:07:53.188850 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 16 02:07:53.189176 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 16 02:07:53.189204 kernel: printk: legacy console [ttyS0] disabled Dec 16 02:07:53.189224 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 16 02:07:53.189244 kernel: printk: legacy console [ttyS0] enabled Dec 16 02:07:53.189263 kernel: printk: legacy bootconsole [uart0] disabled Dec 16 02:07:53.189289 kernel: thunder_xcv, ver 1.0 Dec 16 02:07:53.189308 kernel: thunder_bgx, ver 1.0 Dec 16 02:07:53.189327 kernel: nicpf, ver 1.0 Dec 16 02:07:53.189346 kernel: nicvf, ver 1.0 Dec 16 02:07:53.191273 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 02:07:53.191541 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T02:07:51 UTC (1765850871) Dec 16 02:07:53.191569 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 02:07:53.191600 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Dec 16 02:07:53.191622 kernel: NET: Registered PF_INET6 protocol family Dec 16 02:07:53.191643 kernel: watchdog: NMI not fully supported Dec 16 02:07:53.191664 kernel: watchdog: Hard watchdog permanently disabled Dec 16 02:07:53.191686 kernel: Segment Routing with IPv6 Dec 16 02:07:53.191706 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 02:07:53.191726 kernel: NET: Registered PF_PACKET protocol family Dec 16 02:07:53.191750 kernel: Key type 
dns_resolver registered Dec 16 02:07:53.191770 kernel: registered taskstats version 1 Dec 16 02:07:53.191790 kernel: Loading compiled-in X.509 certificates Dec 16 02:07:53.191810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 02:07:53.191829 kernel: Demotion targets for Node 0: null Dec 16 02:07:53.191848 kernel: Key type .fscrypt registered Dec 16 02:07:53.191868 kernel: Key type fscrypt-provisioning registered Dec 16 02:07:53.191891 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 02:07:53.191912 kernel: ima: Allocated hash algorithm: sha1 Dec 16 02:07:53.191931 kernel: ima: No architecture policies found Dec 16 02:07:53.191951 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 02:07:53.192015 kernel: clk: Disabling unused clocks Dec 16 02:07:53.192035 kernel: PM: genpd: Disabling unused power domains Dec 16 02:07:53.192054 kernel: Freeing unused kernel memory: 12480K Dec 16 02:07:53.192073 kernel: Run /init as init process Dec 16 02:07:53.192099 kernel: with arguments: Dec 16 02:07:53.192118 kernel: /init Dec 16 02:07:53.192136 kernel: with environment: Dec 16 02:07:53.192155 kernel: HOME=/ Dec 16 02:07:53.192173 kernel: TERM=linux Dec 16 02:07:53.192193 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 02:07:53.192438 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 02:07:53.192638 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 02:07:53.192665 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 02:07:53.192684 kernel: GPT:25804799 != 33554431 Dec 16 02:07:53.192703 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 02:07:53.192722 kernel: GPT:25804799 != 33554431 Dec 16 02:07:53.192740 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 02:07:53.192764 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 02:07:53.192783 kernel: SCSI subsystem initialized Dec 16 02:07:53.192802 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 02:07:53.192821 kernel: device-mapper: uevent: version 1.0.3 Dec 16 02:07:53.192840 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 02:07:53.192859 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 02:07:53.192878 kernel: raid6: neonx8 gen() 6546 MB/s Dec 16 02:07:53.192902 kernel: raid6: neonx4 gen() 6503 MB/s Dec 16 02:07:53.192921 kernel: raid6: neonx2 gen() 5427 MB/s Dec 16 02:07:53.192939 kernel: raid6: neonx1 gen() 3893 MB/s Dec 16 02:07:53.192980 kernel: raid6: int64x8 gen() 3613 MB/s Dec 16 02:07:53.193003 kernel: raid6: int64x4 gen() 3670 MB/s Dec 16 02:07:53.193023 kernel: raid6: int64x2 gen() 3566 MB/s Dec 16 02:07:53.193042 kernel: raid6: int64x1 gen() 2699 MB/s Dec 16 02:07:53.193066 kernel: raid6: using algorithm neonx8 gen() 6546 MB/s Dec 16 02:07:53.193086 kernel: raid6: .... 
xor() 4720 MB/s, rmw enabled Dec 16 02:07:53.193106 kernel: raid6: using neon recovery algorithm Dec 16 02:07:53.193125 kernel: xor: measuring software checksum speed Dec 16 02:07:53.193145 kernel: 8regs : 12913 MB/sec Dec 16 02:07:53.193165 kernel: 32regs : 12396 MB/sec Dec 16 02:07:53.193183 kernel: arm64_neon : 8896 MB/sec Dec 16 02:07:53.193206 kernel: xor: using function: 8regs (12913 MB/sec) Dec 16 02:07:53.193226 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 02:07:53.193245 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (222) Dec 16 02:07:53.193264 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 02:07:53.193283 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:53.193303 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 02:07:53.193322 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 02:07:53.193344 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 02:07:53.193363 kernel: loop: module loaded Dec 16 02:07:53.193383 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 02:07:53.193401 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 02:07:53.193422 systemd[1]: Successfully made /usr/ read-only. Dec 16 02:07:53.193447 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:07:53.193473 systemd[1]: Detected virtualization amazon. Dec 16 02:07:53.193493 systemd[1]: Detected architecture arm64. Dec 16 02:07:53.193513 systemd[1]: Running in initrd. Dec 16 02:07:53.193533 systemd[1]: No hostname configured, using default hostname. Dec 16 02:07:53.193554 systemd[1]: Hostname set to . Dec 16 02:07:53.193574 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:07:53.193595 systemd[1]: Queued start job for default target initrd.target. Dec 16 02:07:53.193619 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:07:53.193659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:07:53.193682 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:07:53.193704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 02:07:53.193726 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:07:53.193768 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 02:07:53.193790 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 02:07:53.193811 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:07:53.193833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:07:53.193854 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:07:53.193879 systemd[1]: Reached target paths.target - Path Units. 
Dec 16 02:07:53.193900 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:07:53.193921 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:07:53.193942 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:07:53.197054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:07:53.197091 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:07:53.197114 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:07:53.197148 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 02:07:53.197170 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 02:07:53.197192 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:07:53.197213 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:07:53.197235 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:07:53.197256 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:07:53.197277 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 02:07:53.197303 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 02:07:53.197324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:07:53.197345 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 02:07:53.197367 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 02:07:53.197388 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 02:07:53.197410 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:07:53.197431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:07:53.197458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:07:53.197480 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 02:07:53.197507 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:07:53.197530 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 02:07:53.197551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:07:53.197644 systemd-journald[361]: Collecting audit messages is enabled. Dec 16 02:07:53.197698 systemd-journald[361]: Journal started Dec 16 02:07:53.197736 systemd-journald[361]: Runtime Journal (/run/log/journal/ec24fcd88cb794ccbebcadef1b0323d1) is 8M, max 75.3M, 67.3M free. Dec 16 02:07:53.206516 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:07:53.206586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 02:07:53.210839 kernel: audit: type=1130 audit(1765850873.205:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:07:53.216704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:07:53.226077 kernel: Bridge firewalling registered Dec 16 02:07:53.226119 kernel: audit: type=1130 audit(1765850873.217:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.220582 systemd-modules-load[363]: Inserted module 'br_netfilter' Dec 16 02:07:53.229878 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:07:53.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.239176 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:07:53.252245 kernel: audit: type=1130 audit(1765850873.236:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.252661 kernel: audit: type=1130 audit(1765850873.243:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.248530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 02:07:53.263383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:07:53.271441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:07:53.288281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:07:53.312153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:07:53.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.325049 kernel: audit: type=1130 audit(1765850873.316:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.327298 systemd-tmpfiles[383]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 02:07:53.333862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:07:53.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.345646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 02:07:53.352384 kernel: audit: type=1130 audit(1765850873.335:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.352446 kernel: audit: type=1334 audit(1765850873.340:8): prog-id=6 op=LOAD Dec 16 02:07:53.340000 audit: BPF prog-id=6 op=LOAD Dec 16 02:07:53.355792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:07:53.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.368010 kernel: audit: type=1130 audit(1765850873.357:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.371089 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:07:53.375295 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 02:07:53.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.388004 kernel: audit: type=1130 audit(1765850873.369:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.418121 dracut-cmdline[400]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:07:53.488253 systemd-resolved[394]: Positive Trust Anchors: Dec 16 02:07:53.488288 systemd-resolved[394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:07:53.488297 systemd-resolved[394]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:07:53.488358 systemd-resolved[394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:07:53.620003 kernel: Loading iSCSI transport class v2.0-870. Dec 16 02:07:53.636001 kernel: iscsi: registered transport (tcp) Dec 16 02:07:53.660364 kernel: iscsi: registered transport (qla4xxx) Dec 16 02:07:53.660465 kernel: QLogic iSCSI HBA Driver Dec 16 02:07:53.706156 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Dec 16 02:07:53.738137 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:07:53.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.748840 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:07:53.802996 kernel: random: crng init done Dec 16 02:07:53.807592 systemd-resolved[394]: Defaulting to hostname 'linux'. Dec 16 02:07:53.811665 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 02:07:53.820551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:07:53.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.842308 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 02:07:53.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.850942 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 02:07:53.879632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 02:07:53.935105 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:07:53.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:53.939000 audit: BPF prog-id=7 op=LOAD Dec 16 02:07:53.940000 audit: BPF prog-id=8 op=LOAD Dec 16 02:07:53.943432 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:07:54.006176 systemd-udevd[637]: Using default interface naming scheme 'v257'. Dec 16 02:07:54.030506 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:07:54.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.043082 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 02:07:54.099302 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:07:54.103153 dracut-pre-trigger[715]: rd.md=0: removing MD RAID activation Dec 16 02:07:54.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.108000 audit: BPF prog-id=9 op=LOAD Dec 16 02:07:54.111490 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:07:54.170048 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:07:54.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:07:54.183061 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:07:54.223822 systemd-networkd[755]: lo: Link UP Dec 16 02:07:54.223842 systemd-networkd[755]: lo: Gained carrier Dec 16 02:07:54.226079 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:07:54.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.230757 systemd[1]: Reached target network.target - Network. Dec 16 02:07:54.352115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:07:54.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.360181 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 02:07:54.569562 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 02:07:54.569645 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 16 02:07:54.591976 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 02:07:54.592436 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 02:07:54.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.597739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:07:54.598052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:07:54.600913 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:07:54.606109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:07:54.627999 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:7c:41:a3:65:c9 Dec 16 02:07:54.632356 (udev-worker)[783]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:07:54.646026 kernel: nvme nvme0: using unchecked data buffer Dec 16 02:07:54.657490 systemd-networkd[755]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:07:54.657514 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:07:54.672275 systemd-networkd[755]: eth0: Link UP Dec 16 02:07:54.676062 systemd-networkd[755]: eth0: Gained carrier Dec 16 02:07:54.676090 systemd-networkd[755]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:07:54.689926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:07:54.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.703143 systemd-networkd[755]: eth0: DHCPv4 address 172.31.29.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 02:07:54.828628 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Dec 16 02:07:54.863396 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 02:07:54.869886 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 02:07:54.904826 disk-uuid[881]: Primary Header is updated. Dec 16 02:07:54.904826 disk-uuid[881]: Secondary Entries is updated. Dec 16 02:07:54.904826 disk-uuid[881]: Secondary Header is updated. Dec 16 02:07:55.016610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 02:07:55.064278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 02:07:55.397086 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 02:07:55.405657 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 16 02:07:55.405703 kernel: audit: type=1130 audit(1765850875.398:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.401796 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:07:55.406482 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:07:55.406790 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:07:55.418993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 02:07:55.458055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:07:55.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.467997 kernel: audit: type=1130 audit(1765850875.459:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.865270 systemd-networkd[755]: eth0: Gained IPv6LL Dec 16 02:07:55.976511 disk-uuid[888]: Warning: The kernel is still using the old partition table. Dec 16 02:07:55.976511 disk-uuid[888]: The new table will be used at the next reboot or after you Dec 16 02:07:55.976511 disk-uuid[888]: run partprobe(8) or kpartx(8) Dec 16 02:07:55.976511 disk-uuid[888]: The operation has completed successfully. Dec 16 02:07:55.999770 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 02:07:56.000257 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 02:07:56.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.010379 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 02:07:56.021381 kernel: audit: type=1130 audit(1765850876.007:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:07:56.021434 kernel: audit: type=1131 audit(1765850876.007:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.074024 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1098) Dec 16 02:07:56.078856 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:56.078933 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:56.126954 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 02:07:56.127051 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 02:07:56.138008 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:56.139540 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 02:07:56.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.146432 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 02:07:56.153163 kernel: audit: type=1130 audit(1765850876.143:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.440436 ignition[1117]: Ignition 2.24.0 Dec 16 02:07:56.441099 ignition[1117]: Stage: fetch-offline Dec 16 02:07:56.441900 ignition[1117]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:56.441931 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:56.442826 ignition[1117]: Ignition finished successfully Dec 16 02:07:56.458108 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:07:56.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.464477 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 02:07:56.474615 kernel: audit: type=1130 audit(1765850876.459:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:07:56.512623 ignition[1123]: Ignition 2.24.0 Dec 16 02:07:56.512656 ignition[1123]: Stage: fetch Dec 16 02:07:56.513071 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:56.513109 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:56.514460 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:56.566066 ignition[1123]: PUT result: OK Dec 16 02:07:56.570693 ignition[1123]: parsed url from cmdline: "" Dec 16 02:07:56.570833 ignition[1123]: no config URL provided Dec 16 02:07:56.572402 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:07:56.572442 ignition[1123]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:07:56.572495 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:56.575406 ignition[1123]: PUT result: OK Dec 16 02:07:56.575562 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 02:07:56.584511 ignition[1123]: GET result: OK Dec 16 02:07:56.584701 ignition[1123]: parsing config with SHA512: d31cea13c3162c1a5cfbba6378335c8e2ae220e9fb8d5294a3f055df970d0f4896210ef6e2c82eed40fcb2ec0a3a8c5008061ed954f65c60b7eb2598de9785c9 Dec 16 02:07:56.600541 unknown[1123]: fetched base config from "system" Dec 16 02:07:56.600572 unknown[1123]: fetched base config from "system" Dec 16 02:07:56.601225 ignition[1123]: fetch: fetch complete Dec 16 02:07:56.600586 unknown[1123]: fetched user config from "aws" Dec 16 02:07:56.601237 ignition[1123]: fetch: fetch passed Dec 16 02:07:56.601323 ignition[1123]: Ignition finished successfully Dec 16 02:07:56.614427 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 02:07:56.617328 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 02:07:56.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.634015 kernel: audit: type=1130 audit(1765850876.612:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.668075 ignition[1129]: Ignition 2.24.0 Dec 16 02:07:56.668109 ignition[1129]: Stage: kargs Dec 16 02:07:56.668533 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:56.668558 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:56.668859 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:56.671770 ignition[1129]: PUT result: OK Dec 16 02:07:56.690428 ignition[1129]: kargs: kargs passed Dec 16 02:07:56.690581 ignition[1129]: Ignition finished successfully Dec 16 02:07:56.696744 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 02:07:56.708303 kernel: audit: type=1130 audit(1765850876.698:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.709274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 02:07:56.757412 ignition[1135]: Ignition 2.24.0 Dec 16 02:07:56.757452 ignition[1135]: Stage: disks Dec 16 02:07:56.757916 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:56.757942 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:56.759490 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:56.763549 ignition[1135]: PUT result: OK Dec 16 02:07:56.777272 ignition[1135]: disks: disks passed Dec 16 02:07:56.777424 ignition[1135]: Ignition finished successfully Dec 16 02:07:56.782219 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 02:07:56.807186 kernel: audit: type=1130 audit(1765850876.786:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.788382 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 02:07:56.807247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 02:07:56.813408 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:07:56.813729 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:07:56.834441 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:07:56.841246 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 02:07:56.951939 systemd-fsck[1143]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 02:07:56.956633 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 02:07:56.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.966459 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 02:07:56.972190 kernel: audit: type=1130 audit(1765850876.962:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.138007 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 02:07:57.139907 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 02:07:57.144533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 02:07:57.214566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:07:57.221553 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 02:07:57.230363 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 02:07:57.230439 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 02:07:57.230492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:07:57.260847 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 02:07:57.267213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 02:07:57.278022 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1162) Dec 16 02:07:57.285899 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:57.286056 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:57.293045 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 02:07:57.293107 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 02:07:57.296390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 02:07:57.680579 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 02:07:57.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.686848 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 02:07:57.694930 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 02:07:57.723613 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 02:07:57.728174 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:57.761483 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 02:07:57.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.777092 ignition[1259]: INFO : Ignition 2.24.0 Dec 16 02:07:57.777092 ignition[1259]: INFO : Stage: mount Dec 16 02:07:57.781455 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:57.781455 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:57.781455 ignition[1259]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:57.789442 ignition[1259]: INFO : PUT result: OK Dec 16 02:07:57.796651 ignition[1259]: INFO : mount: mount passed Dec 16 02:07:57.798476 ignition[1259]: INFO : Ignition finished successfully Dec 16 02:07:57.806012 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 02:07:57.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.811698 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 02:07:58.143240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:07:58.193990 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1270) Dec 16 02:07:58.198691 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:58.198742 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:58.205676 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 02:07:58.205754 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 02:07:58.209225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
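
At this point /sysroot (ext4) and /sysroot/oem (btrfs) are both mounted. A quick sketch, unrelated to Ignition itself, that confirms a path is a mountpoint by scanning /proc/self/mounts:

def is_mounted(path):
    # /proc/self/mounts has one "<source> <mountpoint> <fstype> ..." entry per line.
    with open("/proc/self/mounts") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3 and fields[1] == path:
                return fields[2]  # filesystem type, e.g. "ext4" or "btrfs"
    return None

for target in ("/sysroot", "/sysroot/oem"):
    print(target, "->", is_mounted(target) or "not mounted")
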
Dec 16 02:07:58.257312 ignition[1287]: INFO : Ignition 2.24.0 Dec 16 02:07:58.257312 ignition[1287]: INFO : Stage: files Dec 16 02:07:58.261094 ignition[1287]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:58.261094 ignition[1287]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:58.265921 ignition[1287]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:58.269593 ignition[1287]: INFO : PUT result: OK Dec 16 02:07:58.274679 ignition[1287]: DEBUG : files: compiled without relabeling support, skipping Dec 16 02:07:58.279407 ignition[1287]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 02:07:58.279407 ignition[1287]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 02:07:58.289554 ignition[1287]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 02:07:58.294227 ignition[1287]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 02:07:58.300808 ignition[1287]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 02:07:58.299198 unknown[1287]: wrote ssh authorized keys file for user: core Dec 16 02:07:58.306505 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 02:07:58.306505 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 02:07:58.399888 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 02:07:58.524338 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:07:58.528923 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:07:58.559419 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:07:58.559419 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:07:58.559419 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:07:58.559419 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:07:58.559419 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:07:58.559419 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 02:07:59.019733 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 02:07:59.446766 ignition[1287]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 02:07:59.446766 ignition[1287]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 02:07:59.455061 ignition[1287]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:07:59.461544 ignition[1287]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:07:59.461544 ignition[1287]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 02:07:59.461544 ignition[1287]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 02:07:59.477224 ignition[1287]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 02:07:59.477224 ignition[1287]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:07:59.477224 ignition[1287]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:07:59.477224 ignition[1287]: INFO : files: files passed Dec 16 02:07:59.477224 ignition[1287]: INFO : Ignition finished successfully Dec 16 02:07:59.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.469675 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 02:07:59.487612 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 02:07:59.511860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 02:07:59.524658 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 02:07:59.527028 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 02:07:59.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:07:59.561843 initrd-setup-root-after-ignition[1319]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:07:59.566619 initrd-setup-root-after-ignition[1319]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:07:59.570683 initrd-setup-root-after-ignition[1323]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:07:59.576843 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:07:59.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.582838 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 02:07:59.586797 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 02:07:59.695648 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 02:07:59.695979 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 02:07:59.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.702814 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 02:07:59.707643 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 02:07:59.713174 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 02:07:59.718905 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 02:07:59.782084 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:07:59.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.788188 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 02:07:59.830582 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:07:59.831153 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:07:59.836535 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:07:59.839934 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 02:07:59.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.846917 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 02:07:59.847226 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:07:59.855445 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 02:07:59.860022 systemd[1]: Stopped target basic.target - Basic System. Dec 16 02:07:59.867459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
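
One of the files-stage operations above enables the kubernetes sysext image by linking /sysroot/etc/extensions/kubernetes.raw to /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw. A hedged sketch of that single step, with the sysroot prefix as a parameter; the helper name is made up and this is not Ignition's code:

import os

def enable_sysext(sysroot, name, target):
    # Create <sysroot>/etc/extensions/<name>.raw -> <target>, as the files stage logs above.
    link_dir = os.path.join(sysroot, "etc/extensions")
    os.makedirs(link_dir, exist_ok=True)
    link = os.path.join(link_dir, name + ".raw")
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(target, link)
    return link

print(enable_sysext("/sysroot", "kubernetes",
                    "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"))
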
Dec 16 02:07:59.872585 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:07:59.877770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 02:07:59.883061 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:07:59.886807 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 02:07:59.894104 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:07:59.900019 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 02:07:59.903699 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 02:07:59.909905 systemd[1]: Stopped target swap.target - Swaps. Dec 16 02:07:59.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.912293 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 02:07:59.912536 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:07:59.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.913657 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:07:59.914241 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:07:59.914826 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 02:07:59.920670 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:07:59.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.921058 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 02:07:59.921366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 02:07:59.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.934936 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 02:07:59.935261 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:07:59.943567 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 02:07:59.943926 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 02:07:59.961495 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 02:07:59.971128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 02:07:59.974718 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 02:07:59.979430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:07:59.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.986726 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
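
The SERVICE_START/SERVICE_STOP audit records threaded through this log share a fixed key=value shape (pid, auid, unit=, res=). A rough parser that pulls out the unit and result, assuming only the format visible here:

import re

AUDIT_RE = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='unit=(\S+) .*?res=(\w+)'"
)

def parse_audit(line):
    m = AUDIT_RE.search(line)
    if m:
        event, unit, result = m.groups()
        return {"event": event, "unit": unit, "result": result}
    return None

sample = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
          "subj=kernel msg='unit=ignition-mount comm=\"systemd\" "
          "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")
print(parse_audit(sample))
# -> {'event': 'SERVICE_STOP', 'unit': 'ignition-mount', 'result': 'success'}
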
Dec 16 02:07:59.988492 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:07:59.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:59.996591 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 02:07:59.997236 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:08:00.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.027725 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 02:08:00.028645 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 02:08:00.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.048541 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 02:08:00.059011 ignition[1343]: INFO : Ignition 2.24.0 Dec 16 02:08:00.059011 ignition[1343]: INFO : Stage: umount Dec 16 02:08:00.059011 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:08:00.059011 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:08:00.059011 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:08:00.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.066997 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 02:08:00.076419 ignition[1343]: INFO : PUT result: OK Dec 16 02:08:00.067242 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 02:08:00.086301 ignition[1343]: INFO : umount: umount passed Dec 16 02:08:00.086301 ignition[1343]: INFO : Ignition finished successfully Dec 16 02:08:00.091132 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 02:08:00.091564 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 02:08:00.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.101292 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 02:08:00.101578 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 02:08:00.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.108245 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 02:08:00.108478 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 02:08:00.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 02:08:00.115079 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 02:08:00.115298 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 02:08:00.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.121881 systemd[1]: Stopped target network.target - Network. Dec 16 02:08:00.125712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 02:08:00.125825 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:08:00.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.133339 systemd[1]: Stopped target paths.target - Path Units. Dec 16 02:08:00.137133 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 02:08:00.137216 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:08:00.142081 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 02:08:00.147103 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 02:08:00.151152 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 02:08:00.151242 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:08:00.158678 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 02:08:00.158748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:08:00.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.161074 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 02:08:00.161127 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:08:00.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.163840 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 02:08:00.163938 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 02:08:00.168032 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 02:08:00.168193 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 02:08:00.175086 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 02:08:00.175185 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 02:08:00.178511 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 02:08:00.184987 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 02:08:00.209911 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 02:08:00.212279 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 02:08:00.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.219000 audit: BPF prog-id=9 op=UNLOAD Dec 16 02:08:00.223323 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 02:08:00.225813 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 02:08:00.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.231000 audit: BPF prog-id=6 op=UNLOAD Dec 16 02:08:00.233139 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 02:08:00.238181 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 02:08:00.238267 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:08:00.247157 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 02:08:00.249456 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 02:08:00.249606 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:08:00.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.265831 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 02:08:00.265939 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:08:00.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.276047 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 02:08:00.276158 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 02:08:00.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.285915 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:08:00.308749 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 02:08:00.313438 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:08:00.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.317669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 02:08:00.317753 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 02:08:00.321845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 02:08:00.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.321916 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:08:00.325644 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
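
The kernel also audits BPF program lifecycle around the network teardown ("audit: BPF prog-id=9 op=UNLOAD" above, with matching LOAD records earlier and later in the log). A small diagnostic sketch that tallies LOAD against UNLOAD in journal text to list programs still loaded; a convenience for reading logs like this, not part of the boot flow:

import re
import sys

BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def outstanding_bpf_progs(lines):
    live = set()
    for line in lines:
        for prog_id, op in BPF_RE.findall(line):
            if op == "LOAD":
                live.add(prog_id)
            else:
                live.discard(prog_id)
    return sorted(live, key=int)

if __name__ == "__main__":
    print("still loaded:", outstanding_bpf_progs(sys.stdin))

For example: journalctl -b | python3 bpf_balance.py (the script name is illustrative).
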
Dec 16 02:08:00.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.325784 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:08:00.336231 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 02:08:00.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.336483 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 02:08:00.342799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 02:08:00.342904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:08:00.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.350183 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 02:08:00.352679 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 02:08:00.352817 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:08:00.355823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 02:08:00.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.355942 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:08:00.369379 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 02:08:00.369496 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:08:00.375153 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 02:08:00.375255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:08:00.375882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:08:00.414532 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 02:08:00.414573 kernel: audit: type=1131 audit(1765850880.403:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:00.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.375982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:00.378075 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 02:08:00.402588 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 02:08:00.429639 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 02:08:00.430078 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 02:08:00.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.440388 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 02:08:00.458231 kernel: audit: type=1130 audit(1765850880.439:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.458288 kernel: audit: type=1131 audit(1765850880.439:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:00.459102 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 02:08:00.507025 systemd[1]: Switching root. Dec 16 02:08:00.552047 systemd-journald[361]: Journal stopped Dec 16 02:08:02.730760 systemd-journald[361]: Received SIGTERM from PID 1 (systemd). Dec 16 02:08:02.730907 kernel: audit: type=1335 audit(1765850880.557:83): pid=361 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Dec 16 02:08:02.731012 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 02:08:02.731064 kernel: SELinux: policy capability open_perms=1 Dec 16 02:08:02.731110 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 02:08:02.731157 kernel: SELinux: policy capability always_check_network=0 Dec 16 02:08:02.731189 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 02:08:02.731221 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 02:08:02.731254 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 02:08:02.731284 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 02:08:02.731315 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 02:08:02.731347 kernel: audit: type=1403 audit(1765850880.865:84): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 02:08:02.731381 systemd[1]: Successfully loaded SELinux policy in 88.560ms. Dec 16 02:08:02.731439 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.815ms. 
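
systemd prints several timings in this stretch: the SELinux policy load took 88.560ms, relabeling /dev, /dev/shm and /run took 15.815ms, and a later reload reports 621 ms. A small helper for pulling such durations out of journal lines, covering both the "88.560ms" and "621 ms" spellings seen here:

import re

DURATION_RE = re.compile(r"in (\d+(?:\.\d+)?)\s*ms\b")

def durations_ms(lines):
    out = []
    for line in lines:
        for value in DURATION_RE.findall(line):
            out.append((float(value), line.strip()))
    return out

sample = [
    "systemd[1]: Successfully loaded SELinux policy in 88.560ms.",
    "systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.815ms.",
    "systemd[1]: Reloading finished in 621 ms.",
]
for ms, line in durations_ms(sample):
    print(f"{ms:8.3f} ms  {line}")
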
Dec 16 02:08:02.731477 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:08:02.731511 systemd[1]: Detected virtualization amazon. Dec 16 02:08:02.731547 systemd[1]: Detected architecture arm64. Dec 16 02:08:02.731586 systemd[1]: Detected first boot. Dec 16 02:08:02.731627 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:08:02.731662 kernel: audit: type=1334 audit(1765850880.964:85): prog-id=10 op=LOAD Dec 16 02:08:02.731692 kernel: audit: type=1334 audit(1765850880.964:86): prog-id=10 op=UNLOAD Dec 16 02:08:02.731723 kernel: audit: type=1334 audit(1765850880.964:87): prog-id=11 op=LOAD Dec 16 02:08:02.731754 kernel: audit: type=1334 audit(1765850880.964:88): prog-id=11 op=UNLOAD Dec 16 02:08:02.731786 zram_generator::config[1390]: No configuration found. Dec 16 02:08:02.731825 kernel: NET: Registered PF_VSOCK protocol family Dec 16 02:08:02.731861 systemd[1]: Populated /etc with preset unit settings. Dec 16 02:08:02.731894 kernel: audit: type=1334 audit(1765850881.937:89): prog-id=12 op=LOAD Dec 16 02:08:02.731925 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 02:08:02.732034 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 02:08:02.732077 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 02:08:02.732112 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 02:08:02.732145 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 02:08:02.732183 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 02:08:02.732220 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 02:08:02.732251 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 02:08:02.732281 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 02:08:02.732311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 02:08:02.732340 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 02:08:02.732394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:08:02.732431 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:08:02.732464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 02:08:02.732496 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 02:08:02.732526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 02:08:02.732559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:08:02.732589 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 02:08:02.732625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:08:02.732659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
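
The systemd version banner above encodes compile-time features as a run of +FLAG/-FLAG tokens. A trivial sketch that splits the banner into enabled and disabled sets (only part of the flag string is reproduced here):

def split_features(banner):
    # banner is the text inside the parentheses, e.g. "+PAM +AUDIT -APPARMOR ..."
    enabled, disabled = [], []
    for token in banner.split():
        if token.startswith("+"):
            enabled.append(token[1:])
        elif token.startswith("-"):
            disabled.append(token[1:])
    return enabled, disabled

enabled, disabled = split_features(
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL"
)
print("enabled:", enabled)
print("disabled:", disabled)
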
Dec 16 02:08:02.732690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 02:08:02.732721 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 02:08:02.732751 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 02:08:02.732785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 02:08:02.732819 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:08:02.732867 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:08:02.732904 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 02:08:02.732939 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:08:02.733003 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:08:02.733043 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 02:08:02.733075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 02:08:02.733109 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 02:08:02.733153 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:08:02.733183 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 02:08:02.733212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:08:02.733243 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 02:08:02.733273 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 02:08:02.733303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:08:02.733332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:08:02.733366 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 02:08:02.733396 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 02:08:02.733426 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 02:08:02.733467 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 02:08:02.733499 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 02:08:02.733547 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 02:08:02.733591 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 02:08:02.733631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 02:08:02.733662 systemd[1]: Reached target machines.target - Containers. Dec 16 02:08:02.733696 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 02:08:02.733730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:08:02.733760 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:08:02.733793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 02:08:02.733822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:08:02.733865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
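
Unit names in the entries above, such as dev-disk-by\x2dlabel-OEM.device, system-addon\x2dconfig.slice and etc-machine\x2did.mount, use systemd's escaping: path separators become "-" and literal dashes become "\x2d". A rough sketch of the inverse mapping; systemd-escape(1) is the real tool, and this ignores some corner cases:

def unescape_unit(name):
    # Reverse the escaping seen in names like "dev-disk-by\x2dlabel-OEM.device":
    # "\xNN" becomes the raw byte, and "-" separates path components.
    name = name.rsplit(".", 1)[0]  # drop the ".device"/".mount"/".slice" suffix
    out = []
    i = 0
    while i < len(name):
        if name.startswith("\\x", i) and i + 4 <= len(name):
            out.append(chr(int(name[i + 2:i + 4], 16)))
            i += 4
        elif name[i] == "-":
            out.append("/")
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

print(unescape_unit(r"dev-disk-by\x2dlabel-OEM.device"))  # /dev/disk/by-label/OEM
print(unescape_unit(r"etc-machine\x2did.mount"))          # /etc/machine-id
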
Dec 16 02:08:02.733896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:08:02.733925 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 02:08:02.734160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:08:02.734209 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 02:08:02.734240 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 02:08:02.734273 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 02:08:02.734313 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 02:08:02.734345 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 02:08:02.734376 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:08:02.734408 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:08:02.734444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:08:02.734474 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:08:02.734505 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 02:08:02.736088 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 02:08:02.736135 kernel: fuse: init (API version 7.41) Dec 16 02:08:02.736168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:08:02.736198 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 02:08:02.736241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 02:08:02.736272 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 02:08:02.736302 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 02:08:02.736337 kernel: ACPI: bus type drm_connector registered Dec 16 02:08:02.736365 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 02:08:02.736395 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 02:08:02.736429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:08:02.736459 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 02:08:02.736489 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 02:08:02.736524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:08:02.736553 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:08:02.736586 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:08:02.736616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:08:02.736650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:08:02.736686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:08:02.736716 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 02:08:02.736746 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 02:08:02.736776 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 16 02:08:02.736812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:08:02.736842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:08:02.736875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 02:08:02.736905 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 02:08:02.736939 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 02:08:02.737000 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 02:08:02.737040 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 02:08:02.737073 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:08:02.737106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 02:08:02.737139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:08:02.737175 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:08:02.737206 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 02:08:02.737237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:08:02.737268 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 02:08:02.737297 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:08:02.737329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:08:02.737361 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 02:08:02.737396 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:08:02.737477 systemd-journald[1471]: Collecting audit messages is enabled. Dec 16 02:08:02.737529 systemd-journald[1471]: Journal started Dec 16 02:08:02.737607 systemd-journald[1471]: Runtime Journal (/run/log/journal/ec24fcd88cb794ccbebcadef1b0323d1) is 8M, max 75.3M, 67.3M free. Dec 16 02:08:02.097000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 02:08:02.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:02.341000 audit: BPF prog-id=14 op=UNLOAD Dec 16 02:08:02.341000 audit: BPF prog-id=13 op=UNLOAD Dec 16 02:08:02.343000 audit: BPF prog-id=15 op=LOAD Dec 16 02:08:02.344000 audit: BPF prog-id=16 op=LOAD Dec 16 02:08:02.344000 audit: BPF prog-id=17 op=LOAD Dec 16 02:08:02.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.743021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:08:02.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:02.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.726000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 02:08:02.726000 audit[1471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcef19d30 a2=4000 a3=0 items=0 ppid=1 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:02.726000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 02:08:01.924213 systemd[1]: Queued start job for default target multi-user.target. Dec 16 02:08:01.941151 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 02:08:01.942123 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 02:08:02.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.769215 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:08:02.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.767059 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 02:08:02.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.772746 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 02:08:02.778405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 02:08:02.782300 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 02:08:02.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.842809 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 02:08:02.846815 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:08:02.852844 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 02:08:02.862463 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 02:08:02.883998 kernel: loop1: detected capacity change from 0 to 100192 Dec 16 02:08:02.891750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 02:08:02.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.920272 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:08:02.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.937113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 02:08:02.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.950676 systemd-tmpfiles[1500]: ACLs are not supported, ignoring. Dec 16 02:08:02.950717 systemd-tmpfiles[1500]: ACLs are not supported, ignoring. Dec 16 02:08:02.966216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:08:02.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:02.972740 systemd-journald[1471]: Time spent on flushing to /var/log/journal/ec24fcd88cb794ccbebcadef1b0323d1 is 71.539ms for 1068 entries. Dec 16 02:08:02.972740 systemd-journald[1471]: System Journal (/var/log/journal/ec24fcd88cb794ccbebcadef1b0323d1) is 8M, max 588.1M, 580.1M free. Dec 16 02:08:03.085312 systemd-journald[1471]: Received client request to flush runtime journal. Dec 16 02:08:03.085421 kernel: loop2: detected capacity change from 0 to 45344 Dec 16 02:08:03.085493 kernel: loop3: detected capacity change from 0 to 61504 Dec 16 02:08:03.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.065000 audit: BPF prog-id=18 op=LOAD Dec 16 02:08:03.066000 audit: BPF prog-id=19 op=LOAD Dec 16 02:08:03.066000 audit: BPF prog-id=20 op=LOAD Dec 16 02:08:03.076000 audit: BPF prog-id=21 op=LOAD Dec 16 02:08:02.974711 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 02:08:02.989570 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 02:08:02.997721 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 02:08:03.058133 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 02:08:03.072285 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 02:08:03.080474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:08:03.086373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:08:03.108104 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
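
journald reports above that flushing 1068 entries to the persistent journal took 71.539ms. The implied per-entry cost, as a trivial check:

flush_ms = 71.539      # "Time spent on flushing ... is 71.539ms"
entries = 1068         # "... for 1068 entries"

per_entry_us = flush_ms * 1000 / entries
rate_per_s = entries / (flush_ms / 1000)

print(f"{per_entry_us:.1f} us per entry, about {rate_per_s:,.0f} entries/s")
# -> roughly 67.0 us per entry, about 14,929 entries/s
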
Dec 16 02:08:03.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.113000 audit: BPF prog-id=22 op=LOAD Dec 16 02:08:03.115000 audit: BPF prog-id=23 op=LOAD Dec 16 02:08:03.115000 audit: BPF prog-id=24 op=LOAD Dec 16 02:08:03.119284 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 02:08:03.125000 audit: BPF prog-id=25 op=LOAD Dec 16 02:08:03.125000 audit: BPF prog-id=26 op=LOAD Dec 16 02:08:03.125000 audit: BPF prog-id=27 op=LOAD Dec 16 02:08:03.129288 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 02:08:03.180496 systemd-tmpfiles[1545]: ACLs are not supported, ignoring. Dec 16 02:08:03.181102 systemd-tmpfiles[1545]: ACLs are not supported, ignoring. Dec 16 02:08:03.193392 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:08:03.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.291781 systemd-nsresourced[1548]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 02:08:03.292939 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 02:08:03.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.313481 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 02:08:03.437001 kernel: loop4: detected capacity change from 0 to 211168 Dec 16 02:08:03.507312 systemd-oomd[1542]: No swap; memory pressure usage will be degraded Dec 16 02:08:03.513317 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 02:08:03.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.526162 systemd-resolved[1544]: Positive Trust Anchors: Dec 16 02:08:03.526199 systemd-resolved[1544]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:08:03.526209 systemd-resolved[1544]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:08:03.526274 systemd-resolved[1544]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:08:03.545301 systemd-resolved[1544]: Defaulting to hostname 'linux'. Dec 16 02:08:03.547813 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 02:08:03.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.550579 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:08:03.713028 kernel: loop5: detected capacity change from 0 to 100192 Dec 16 02:08:03.733091 kernel: loop6: detected capacity change from 0 to 45344 Dec 16 02:08:03.769030 kernel: loop7: detected capacity change from 0 to 61504 Dec 16 02:08:03.787010 kernel: loop1: detected capacity change from 0 to 211168 Dec 16 02:08:03.810742 (sd-merge)[1569]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 16 02:08:03.818945 (sd-merge)[1569]: Merged extensions into '/usr'. Dec 16 02:08:03.829885 systemd[1]: Reload requested from client PID 1497 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 02:08:03.830604 systemd[1]: Reloading... Dec 16 02:08:04.023035 zram_generator::config[1604]: No configuration found. Dec 16 02:08:04.453661 systemd[1]: Reloading finished in 621 ms. Dec 16 02:08:04.495137 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 02:08:04.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.498721 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 02:08:04.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.519385 systemd[1]: Starting ensure-sysext.service... Dec 16 02:08:04.525276 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:08:04.527000 audit: BPF prog-id=8 op=UNLOAD Dec 16 02:08:04.527000 audit: BPF prog-id=7 op=UNLOAD Dec 16 02:08:04.528000 audit: BPF prog-id=28 op=LOAD Dec 16 02:08:04.528000 audit: BPF prog-id=29 op=LOAD Dec 16 02:08:04.534279 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 02:08:04.537000 audit: BPF prog-id=30 op=LOAD Dec 16 02:08:04.537000 audit: BPF prog-id=15 op=UNLOAD Dec 16 02:08:04.539000 audit: BPF prog-id=31 op=LOAD Dec 16 02:08:04.539000 audit: BPF prog-id=32 op=LOAD Dec 16 02:08:04.539000 audit: BPF prog-id=16 op=UNLOAD Dec 16 02:08:04.539000 audit: BPF prog-id=17 op=UNLOAD Dec 16 02:08:04.544000 audit: BPF prog-id=33 op=LOAD Dec 16 02:08:04.544000 audit: BPF prog-id=25 op=UNLOAD Dec 16 02:08:04.544000 audit: BPF prog-id=34 op=LOAD Dec 16 02:08:04.545000 audit: BPF prog-id=35 op=LOAD Dec 16 02:08:04.545000 audit: BPF prog-id=26 op=UNLOAD Dec 16 02:08:04.545000 audit: BPF prog-id=27 op=UNLOAD Dec 16 02:08:04.547000 audit: BPF prog-id=36 op=LOAD Dec 16 02:08:04.547000 audit: BPF prog-id=18 op=UNLOAD Dec 16 02:08:04.547000 audit: BPF prog-id=37 op=LOAD Dec 16 02:08:04.547000 audit: BPF prog-id=38 op=LOAD Dec 16 02:08:04.547000 audit: BPF prog-id=19 op=UNLOAD Dec 16 02:08:04.547000 audit: BPF prog-id=20 op=UNLOAD Dec 16 02:08:04.555000 audit: BPF prog-id=39 op=LOAD Dec 16 02:08:04.563000 audit: BPF prog-id=21 op=UNLOAD Dec 16 02:08:04.565000 audit: BPF prog-id=40 op=LOAD Dec 16 02:08:04.565000 audit: BPF prog-id=22 op=UNLOAD Dec 16 02:08:04.565000 audit: BPF prog-id=41 op=LOAD Dec 16 02:08:04.565000 audit: BPF prog-id=42 op=LOAD Dec 16 02:08:04.565000 audit: BPF prog-id=23 op=UNLOAD Dec 16 02:08:04.565000 audit: BPF prog-id=24 op=UNLOAD Dec 16 02:08:04.580618 systemd[1]: Reload requested from client PID 1651 ('systemctl') (unit ensure-sysext.service)... Dec 16 02:08:04.580651 systemd[1]: Reloading... Dec 16 02:08:04.613946 systemd-udevd[1653]: Using default interface naming scheme 'v257'. Dec 16 02:08:04.638005 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 02:08:04.638081 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 02:08:04.639646 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 02:08:04.646613 systemd-tmpfiles[1652]: ACLs are not supported, ignoring. Dec 16 02:08:04.646766 systemd-tmpfiles[1652]: ACLs are not supported, ignoring. Dec 16 02:08:04.669980 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:08:04.670006 systemd-tmpfiles[1652]: Skipping /boot Dec 16 02:08:04.705609 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:08:04.705842 systemd-tmpfiles[1652]: Skipping /boot Dec 16 02:08:04.819998 zram_generator::config[1694]: No configuration found. Dec 16 02:08:04.895297 (udev-worker)[1672]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:08:05.464656 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 02:08:05.465005 systemd[1]: Reloading finished in 881 ms. Dec 16 02:08:05.543424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:08:05.553832 kernel: kauditd_printk_skb: 93 callbacks suppressed Dec 16 02:08:05.553994 kernel: audit: type=1130 audit(1765850885.546:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:05.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:05.580248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:08:05.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:05.596300 kernel: audit: type=1130 audit(1765850885.582:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:05.596421 kernel: audit: type=1334 audit(1765850885.593:183): prog-id=43 op=LOAD Dec 16 02:08:05.593000 audit: BPF prog-id=43 op=LOAD Dec 16 02:08:05.599820 kernel: audit: type=1334 audit(1765850885.593:184): prog-id=36 op=UNLOAD Dec 16 02:08:05.593000 audit: BPF prog-id=36 op=UNLOAD Dec 16 02:08:05.601845 kernel: audit: type=1334 audit(1765850885.598:185): prog-id=44 op=LOAD Dec 16 02:08:05.598000 audit: BPF prog-id=44 op=LOAD Dec 16 02:08:05.604782 kernel: audit: type=1334 audit(1765850885.599:186): prog-id=45 op=LOAD Dec 16 02:08:05.599000 audit: BPF prog-id=45 op=LOAD Dec 16 02:08:05.607684 kernel: audit: type=1334 audit(1765850885.599:187): prog-id=37 op=UNLOAD Dec 16 02:08:05.599000 audit: BPF prog-id=37 op=UNLOAD Dec 16 02:08:05.612177 kernel: audit: type=1334 audit(1765850885.599:188): prog-id=38 op=UNLOAD Dec 16 02:08:05.599000 audit: BPF prog-id=38 op=UNLOAD Dec 16 02:08:05.615847 kernel: audit: type=1334 audit(1765850885.601:189): prog-id=46 op=LOAD Dec 16 02:08:05.601000 audit: BPF prog-id=46 op=LOAD Dec 16 02:08:05.619000 kernel: audit: type=1334 audit(1765850885.601:190): prog-id=39 op=UNLOAD Dec 16 02:08:05.601000 audit: BPF prog-id=39 op=UNLOAD Dec 16 02:08:05.608000 audit: BPF prog-id=47 op=LOAD Dec 16 02:08:05.608000 audit: BPF prog-id=40 op=UNLOAD Dec 16 02:08:05.608000 audit: BPF prog-id=48 op=LOAD Dec 16 02:08:05.608000 audit: BPF prog-id=49 op=LOAD Dec 16 02:08:05.608000 audit: BPF prog-id=41 op=UNLOAD Dec 16 02:08:05.608000 audit: BPF prog-id=42 op=UNLOAD Dec 16 02:08:05.612000 audit: BPF prog-id=50 op=LOAD Dec 16 02:08:05.612000 audit: BPF prog-id=30 op=UNLOAD Dec 16 02:08:05.614000 audit: BPF prog-id=51 op=LOAD Dec 16 02:08:05.617000 audit: BPF prog-id=52 op=LOAD Dec 16 02:08:05.617000 audit: BPF prog-id=31 op=UNLOAD Dec 16 02:08:05.617000 audit: BPF prog-id=32 op=UNLOAD Dec 16 02:08:05.621000 audit: BPF prog-id=53 op=LOAD Dec 16 02:08:05.622000 audit: BPF prog-id=33 op=UNLOAD Dec 16 02:08:05.623000 audit: BPF prog-id=54 op=LOAD Dec 16 02:08:05.623000 audit: BPF prog-id=55 op=LOAD Dec 16 02:08:05.623000 audit: BPF prog-id=34 op=UNLOAD Dec 16 02:08:05.623000 audit: BPF prog-id=35 op=UNLOAD Dec 16 02:08:05.625000 audit: BPF prog-id=56 op=LOAD Dec 16 02:08:05.625000 audit: BPF prog-id=57 op=LOAD Dec 16 02:08:05.625000 audit: BPF prog-id=28 op=UNLOAD Dec 16 02:08:05.625000 audit: BPF prog-id=29 op=UNLOAD Dec 16 02:08:05.721126 systemd[1]: Finished ensure-sysext.service. Dec 16 02:08:05.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:05.777385 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 02:08:05.785410 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 02:08:05.788473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:08:05.793394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:08:05.797946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:08:05.807385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:08:05.813939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:08:05.816669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:08:05.816874 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:08:05.822333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 02:08:05.825151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:08:05.828042 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 02:08:05.834000 audit: BPF prog-id=58 op=LOAD Dec 16 02:08:05.836887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:08:05.839344 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 02:08:05.848633 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 02:08:05.856338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:08:05.870539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 02:08:05.880372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 02:08:05.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:05.968195 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 02:08:05.973532 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:08:05.974569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:08:05.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:05.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:05.996000 audit[1873]: SYSTEM_BOOT pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:05.996788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:08:05.998564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:08:06.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.027083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:08:06.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.061053 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 02:08:06.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.065476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:08:06.066596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:08:06.070357 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:08:06.085087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 02:08:06.092397 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:08:06.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:06.093834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 16 02:08:06.205000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 02:08:06.205000 audit[1909]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7630640 a2=420 a3=0 items=0 ppid=1864 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:06.205000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:08:06.210753 augenrules[1909]: No rules Dec 16 02:08:06.213748 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:08:06.215420 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 02:08:06.219808 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 02:08:06.223476 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 02:08:06.233137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:06.239200 systemd-networkd[1872]: lo: Link UP Dec 16 02:08:06.239731 systemd-networkd[1872]: lo: Gained carrier Dec 16 02:08:06.243275 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:08:06.246394 systemd[1]: Reached target network.target - Network. Dec 16 02:08:06.249017 systemd-networkd[1872]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:08:06.249170 systemd-networkd[1872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:08:06.252058 systemd-networkd[1872]: eth0: Link UP Dec 16 02:08:06.252661 systemd-networkd[1872]: eth0: Gained carrier Dec 16 02:08:06.252804 systemd-networkd[1872]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:08:06.254054 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 02:08:06.260449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 02:08:06.271118 systemd-networkd[1872]: eth0: DHCPv4 address 172.31.29.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 02:08:06.306988 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 02:08:06.703986 ldconfig[1870]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 02:08:06.718074 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 02:08:06.723626 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 02:08:06.757841 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 02:08:06.761135 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:08:06.764044 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 02:08:06.767144 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 16 02:08:06.770811 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 02:08:06.773553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 02:08:06.776569 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 02:08:06.780544 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 02:08:06.783611 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 02:08:06.786564 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 02:08:06.786642 systemd[1]: Reached target paths.target - Path Units. Dec 16 02:08:06.788749 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:08:06.792288 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 02:08:06.797882 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 02:08:06.804938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 02:08:06.808484 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 02:08:06.811762 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 02:08:06.818460 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 02:08:06.821583 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 02:08:06.825538 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 02:08:06.828818 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:08:06.831170 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:08:06.833637 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:08:06.833700 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:08:06.835801 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 02:08:06.842107 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 02:08:06.849403 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 02:08:06.859629 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 02:08:06.867338 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 02:08:06.880209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 02:08:06.882651 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 02:08:06.890552 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 02:08:06.898684 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 02:08:06.910830 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 02:08:06.919149 jq[1930]: false Dec 16 02:08:06.922324 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 02:08:06.929511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 02:08:06.948238 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 16 02:08:06.962245 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 02:08:06.966263 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 02:08:06.967332 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 02:08:06.976165 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 02:08:07.003711 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 02:08:07.021215 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 02:08:07.024751 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 02:08:07.025259 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 02:08:07.090125 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 02:08:07.120751 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 02:08:07.124059 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 02:08:07.143381 coreos-metadata[1927]: Dec 16 02:08:07.143 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 02:08:07.149992 coreos-metadata[1927]: Dec 16 02:08:07.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 02:08:07.150165 extend-filesystems[1931]: Found /dev/nvme0n1p6 Dec 16 02:08:07.155141 coreos-metadata[1927]: Dec 16 02:08:07.152 INFO Fetch successful Dec 16 02:08:07.155141 coreos-metadata[1927]: Dec 16 02:08:07.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 02:08:07.155141 coreos-metadata[1927]: Dec 16 02:08:07.154 INFO Fetch successful Dec 16 02:08:07.161422 coreos-metadata[1927]: Dec 16 02:08:07.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 02:08:07.161422 coreos-metadata[1927]: Dec 16 02:08:07.158 INFO Fetch successful Dec 16 02:08:07.161422 coreos-metadata[1927]: Dec 16 02:08:07.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 02:08:07.161422 coreos-metadata[1927]: Dec 16 02:08:07.161 INFO Fetch successful Dec 16 02:08:07.161422 coreos-metadata[1927]: Dec 16 02:08:07.161 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 02:08:07.171462 coreos-metadata[1927]: Dec 16 02:08:07.167 INFO Fetch failed with 404: resource not found Dec 16 02:08:07.171462 coreos-metadata[1927]: Dec 16 02:08:07.167 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 02:08:07.171462 coreos-metadata[1927]: Dec 16 02:08:07.168 INFO Fetch successful Dec 16 02:08:07.171462 coreos-metadata[1927]: Dec 16 02:08:07.168 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 02:08:07.173653 coreos-metadata[1927]: Dec 16 02:08:07.171 INFO Fetch successful Dec 16 02:08:07.173653 coreos-metadata[1927]: Dec 16 02:08:07.171 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 02:08:07.179042 coreos-metadata[1927]: Dec 16 02:08:07.175 INFO Fetch successful Dec 16 02:08:07.179042 coreos-metadata[1927]: Dec 16 02:08:07.175 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 
16 02:08:07.184162 coreos-metadata[1927]: Dec 16 02:08:07.179 INFO Fetch successful Dec 16 02:08:07.184162 coreos-metadata[1927]: Dec 16 02:08:07.181 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 02:08:07.184162 coreos-metadata[1927]: Dec 16 02:08:07.183 INFO Fetch successful Dec 16 02:08:07.194159 extend-filesystems[1931]: Found /dev/nvme0n1p9 Dec 16 02:08:07.210449 jq[1942]: true Dec 16 02:08:07.213252 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 02:08:07.215163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 02:08:07.227480 extend-filesystems[1931]: Checking size of /dev/nvme0n1p9 Dec 16 02:08:07.270026 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 02:08:07.269550 dbus-daemon[1928]: [system] SELinux support is enabled Dec 16 02:08:07.303847 tar[1951]: linux-arm64/LICENSE Dec 16 02:08:07.303847 tar[1951]: linux-arm64/helm Dec 16 02:08:07.280256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 02:08:07.280308 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 02:08:07.283310 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 02:08:07.283343 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 02:08:07.312364 jq[1981]: true Dec 16 02:08:07.333053 ntpd[1933]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:39:58 UTC 2025 (1): Starting Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:39:58 UTC 2025 (1): Starting Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: ---------------------------------------------------- Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: ntp-4 is maintained by Network Time Foundation, Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: corporation. Support and training for ntp-4 are Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: available at https://www.nwtime.org/support Dec 16 02:08:07.341545 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: ---------------------------------------------------- Dec 16 02:08:07.333161 ntpd[1933]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 02:08:07.333179 ntpd[1933]: ---------------------------------------------------- Dec 16 02:08:07.333197 ntpd[1933]: ntp-4 is maintained by Network Time Foundation, Dec 16 02:08:07.333214 ntpd[1933]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 02:08:07.333231 ntpd[1933]: corporation. 
Support and training for ntp-4 are Dec 16 02:08:07.333248 ntpd[1933]: available at https://www.nwtime.org/support Dec 16 02:08:07.333264 ntpd[1933]: ---------------------------------------------------- Dec 16 02:08:07.346338 dbus-daemon[1928]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1872 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 02:08:07.355562 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 02:08:07.369260 extend-filesystems[1931]: Resized partition /dev/nvme0n1p9 Dec 16 02:08:07.371430 ntpd[1933]: proto: precision = 0.096 usec (-23) Dec 16 02:08:07.371888 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: proto: precision = 0.096 usec (-23) Dec 16 02:08:07.374627 extend-filesystems[1996]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 02:08:07.397132 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: basedate set to 2025-12-03 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: gps base set to 2025-12-07 (week 2396) Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Listen normally on 3 eth0 172.31.29.223:123 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Listen normally on 4 lo [::1]:123 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: bind(21) AF_INET6 [fe80::47c:41ff:fea3:65c9%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: unable to create socket on eth0 (5) for [fe80::47c:41ff:fea3:65c9%2]:123 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: cannot bind address fe80::47c:41ff:fea3:65c9%2 Dec 16 02:08:07.397235 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: Listening on routing socket on fd #21 for interface updates Dec 16 02:08:07.388310 ntpd[1933]: basedate set to 2025-12-03 Dec 16 02:08:07.388349 ntpd[1933]: gps base set to 2025-12-07 (week 2396) Dec 16 02:08:07.388574 ntpd[1933]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 02:08:07.388628 ntpd[1933]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 02:08:07.389009 ntpd[1933]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 02:08:07.389079 ntpd[1933]: Listen normally on 3 eth0 172.31.29.223:123 Dec 16 02:08:07.389131 ntpd[1933]: Listen normally on 4 lo [::1]:123 Dec 16 02:08:07.389183 ntpd[1933]: bind(21) AF_INET6 [fe80::47c:41ff:fea3:65c9%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 02:08:07.389224 ntpd[1933]: unable to create socket on eth0 (5) for [fe80::47c:41ff:fea3:65c9%2]:123 Dec 16 02:08:07.389251 ntpd[1933]: cannot bind address fe80::47c:41ff:fea3:65c9%2 Dec 16 02:08:07.389302 ntpd[1933]: Listening on routing socket on fd #21 for interface updates Dec 16 02:08:07.403038 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 16 02:08:07.403121 update_engine[1941]: I20251216 02:08:07.399402 1941 main.cc:92] Flatcar Update Engine starting Dec 16 02:08:07.418676 extend-filesystems[1996]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 02:08:07.418676 
extend-filesystems[1996]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 02:08:07.418676 extend-filesystems[1996]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Dec 16 02:08:07.433693 extend-filesystems[1931]: Resized filesystem in /dev/nvme0n1p9 Dec 16 02:08:07.439181 update_engine[1941]: I20251216 02:08:07.433246 1941 update_check_scheduler.cc:74] Next update check in 7m34s Dec 16 02:08:07.432426 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 02:08:07.433014 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 02:08:07.461368 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 02:08:07.464492 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 02:08:07.473444 systemd[1]: Started update-engine.service - Update Engine. Dec 16 02:08:07.479326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 02:08:07.490515 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:07.491736 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:07.491736 ntpd[1933]: 16 Dec 02:08:07 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:07.490582 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:07.513154 systemd-networkd[1872]: eth0: Gained IPv6LL Dec 16 02:08:07.541984 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 02:08:07.545996 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 02:08:07.555973 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 02:08:07.565607 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 02:08:07.575406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:07.583241 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 02:08:07.700019 systemd-logind[1940]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 02:08:07.700144 systemd-logind[1940]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 16 02:08:07.704327 systemd-logind[1940]: New seat seat0. Dec 16 02:08:07.706998 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 02:08:07.743077 bash[2050]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:08:07.801123 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 02:08:07.806939 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 02:08:07.825622 systemd[1]: Starting sshkeys.service... Dec 16 02:08:07.966080 containerd[1959]: time="2025-12-16T02:08:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 02:08:07.972984 containerd[1959]: time="2025-12-16T02:08:07.968749369Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 02:08:07.989083 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 16 02:08:08.006268 dbus-daemon[1928]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 02:08:08.022582 dbus-daemon[1928]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 02:08:08.060302 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 02:08:08.076117 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 02:08:08.085717 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 02:08:08.106912 containerd[1959]: time="2025-12-16T02:08:08.106807198Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.212µs" Dec 16 02:08:08.106912 containerd[1959]: time="2025-12-16T02:08:08.106877902Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 02:08:08.107177 containerd[1959]: time="2025-12-16T02:08:08.106986310Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 02:08:08.107177 containerd[1959]: time="2025-12-16T02:08:08.107020534Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 02:08:08.107383 containerd[1959]: time="2025-12-16T02:08:08.107317234Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 02:08:08.107383 containerd[1959]: time="2025-12-16T02:08:08.107370370Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:08:08.107550 containerd[1959]: time="2025-12-16T02:08:08.107508382Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:08:08.107607 containerd[1959]: time="2025-12-16T02:08:08.107547586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.116995 containerd[1959]: time="2025-12-16T02:08:08.116294374Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.116995 containerd[1959]: time="2025-12-16T02:08:08.116365462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:08:08.116995 containerd[1959]: time="2025-12-16T02:08:08.116400046Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:08:08.116995 containerd[1959]: time="2025-12-16T02:08:08.116426086Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.116995 containerd[1959]: time="2025-12-16T02:08:08.116871298Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.116995 containerd[1959]: time="2025-12-16T02:08:08.116915026Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 02:08:08.119351 containerd[1959]: time="2025-12-16T02:08:08.119270038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.131943 containerd[1959]: time="2025-12-16T02:08:08.131817178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.132106 containerd[1959]: time="2025-12-16T02:08:08.132043678Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:08:08.132106 containerd[1959]: time="2025-12-16T02:08:08.132075214Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 02:08:08.132195 containerd[1959]: time="2025-12-16T02:08:08.132143350Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 02:08:08.138307 containerd[1959]: time="2025-12-16T02:08:08.137178922Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 02:08:08.148575 containerd[1959]: time="2025-12-16T02:08:08.148499278Z" level=info msg="metadata content store policy set" policy=shared Dec 16 02:08:08.163368 containerd[1959]: time="2025-12-16T02:08:08.163288330Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 02:08:08.163583 containerd[1959]: time="2025-12-16T02:08:08.163405942Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:08:08.163769 containerd[1959]: time="2025-12-16T02:08:08.163699678Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:08:08.163831 containerd[1959]: time="2025-12-16T02:08:08.163763278Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 02:08:08.163876 containerd[1959]: time="2025-12-16T02:08:08.163798150Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 02:08:08.163876 containerd[1959]: time="2025-12-16T02:08:08.163852978Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 02:08:08.164021 containerd[1959]: time="2025-12-16T02:08:08.163885510Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 02:08:08.164021 containerd[1959]: time="2025-12-16T02:08:08.163940842Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 02:08:08.164021 containerd[1959]: time="2025-12-16T02:08:08.164005270Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 02:08:08.164141 containerd[1959]: time="2025-12-16T02:08:08.164037694Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 02:08:08.164141 containerd[1959]: time="2025-12-16T02:08:08.164093998Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 02:08:08.164243 containerd[1959]: 
time="2025-12-16T02:08:08.164127298Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 02:08:08.164243 containerd[1959]: time="2025-12-16T02:08:08.164178622Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 02:08:08.164329 containerd[1959]: time="2025-12-16T02:08:08.164213038Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 02:08:08.167147 containerd[1959]: time="2025-12-16T02:08:08.166218070Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 02:08:08.168925 containerd[1959]: time="2025-12-16T02:08:08.168159802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171341506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171423130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171478954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171507898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171564874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171596758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.171685798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.172860286Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 02:08:08.174386 containerd[1959]: time="2025-12-16T02:08:08.173139538Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 02:08:08.178086 containerd[1959]: time="2025-12-16T02:08:08.175432126Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 02:08:08.178199 containerd[1959]: time="2025-12-16T02:08:08.178097098Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 02:08:08.179789 containerd[1959]: time="2025-12-16T02:08:08.178997098Z" level=info msg="Start snapshots syncer" Dec 16 02:08:08.181280 containerd[1959]: time="2025-12-16T02:08:08.180234910Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 02:08:08.189935 containerd[1959]: time="2025-12-16T02:08:08.189433234Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 02:08:08.193165 containerd[1959]: time="2025-12-16T02:08:08.193053298Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.196038370Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201360058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201498646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201532306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201561154Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201597538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201626926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201653842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201682234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201710170Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201786430Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201819886Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:08:08.201996 containerd[1959]: time="2025-12-16T02:08:08.201842626Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:08:08.202625 containerd[1959]: time="2025-12-16T02:08:08.201870118Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:08:08.202625 containerd[1959]: time="2025-12-16T02:08:08.201891670Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 02:08:08.202625 containerd[1959]: time="2025-12-16T02:08:08.201917278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 02:08:08.202987 containerd[1959]: time="2025-12-16T02:08:08.201945646Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 02:08:08.204202 containerd[1959]: time="2025-12-16T02:08:08.204137806Z" level=info msg="runtime interface created" Dec 16 02:08:08.205262 containerd[1959]: time="2025-12-16T02:08:08.204977962Z" level=info msg="created NRI interface" Dec 16 02:08:08.205262 containerd[1959]: time="2025-12-16T02:08:08.205081534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 02:08:08.205262 containerd[1959]: time="2025-12-16T02:08:08.205122478Z" level=info msg="Connect containerd service" Dec 16 02:08:08.205262 containerd[1959]: time="2025-12-16T02:08:08.205187890Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 02:08:08.205578 amazon-ssm-agent[2021]: Initializing new seelog logger Dec 16 02:08:08.212310 amazon-ssm-agent[2021]: New Seelog Logger Creation Complete Dec 16 02:08:08.212310 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.212310 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.212310 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 processing appconfig overrides Dec 16 02:08:08.216323 containerd[1959]: time="2025-12-16T02:08:08.215416223Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:08:08.226094 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.226094 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 02:08:08.226245 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 processing appconfig overrides Dec 16 02:08:08.226536 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.226536 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.238518 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 processing appconfig overrides Dec 16 02:08:08.244248 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.2239 INFO Proxy environment variables: Dec 16 02:08:08.259535 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.260048 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:08.260343 amazon-ssm-agent[2021]: 2025/12/16 02:08:08 processing appconfig overrides Dec 16 02:08:08.345998 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.2240 INFO no_proxy: Dec 16 02:08:08.446539 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.2240 INFO https_proxy: Dec 16 02:08:08.547060 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.2240 INFO http_proxy: Dec 16 02:08:08.548917 coreos-metadata[2105]: Dec 16 02:08:08.548 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 02:08:08.556000 coreos-metadata[2105]: Dec 16 02:08:08.550 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 02:08:08.556000 coreos-metadata[2105]: Dec 16 02:08:08.551 INFO Fetch successful Dec 16 02:08:08.556000 coreos-metadata[2105]: Dec 16 02:08:08.551 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 02:08:08.556000 coreos-metadata[2105]: Dec 16 02:08:08.552 INFO Fetch successful Dec 16 02:08:08.555775 unknown[2105]: wrote ssh authorized keys file for user: core Dec 16 02:08:08.649144 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.2262 INFO Checking if agent identity type OnPrem can be assumed Dec 16 02:08:08.658248 update-ssh-keys[2157]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:08:08.664468 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 02:08:08.675358 systemd[1]: Finished sshkeys.service. Dec 16 02:08:08.747762 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.2263 INFO Checking if agent identity type EC2 can be assumed Dec 16 02:08:08.799486 locksmithd[2012]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 02:08:08.812982 containerd[1959]: time="2025-12-16T02:08:08.812639869Z" level=info msg="Start subscribing containerd event" Dec 16 02:08:08.812982 containerd[1959]: time="2025-12-16T02:08:08.812753605Z" level=info msg="Start recovering state" Dec 16 02:08:08.813148 containerd[1959]: time="2025-12-16T02:08:08.813014233Z" level=info msg="Start event monitor" Dec 16 02:08:08.813148 containerd[1959]: time="2025-12-16T02:08:08.813052405Z" level=info msg="Start cni network conf syncer for default" Dec 16 02:08:08.813148 containerd[1959]: time="2025-12-16T02:08:08.813071809Z" level=info msg="Start streaming server" Dec 16 02:08:08.813148 containerd[1959]: time="2025-12-16T02:08:08.813093469Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 02:08:08.813148 containerd[1959]: time="2025-12-16T02:08:08.813110677Z" level=info msg="runtime interface starting up..." Dec 16 02:08:08.813148 containerd[1959]: time="2025-12-16T02:08:08.813125077Z" level=info msg="starting plugins..." 
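The coreos-metadata entries above show the two-step IMDSv2 exchange that ends with /home/core/.ssh/authorized_keys being updated: a PUT to the token endpoint, then token-authenticated GETs under the 2021-01-03 metadata version for the public keys. A minimal sketch of the same fetch, assuming it runs on an EC2 instance where 169.254.169.254 is reachable (the header names are the standard IMDSv2 ones, not taken from the log):

#!/usr/bin/env python3
"""Fetch the instance's OpenSSH public key via IMDSv2, mirroring the
coreos-metadata requests logged above. Only works on an EC2 instance."""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 300) -> str:
    # PUT /latest/api/token with a TTL header returns a short-lived session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Every subsequent metadata request carries the token header.
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # Same metadata version and paths as the fetches logged above.
    print("available keys:", imds_get("/2021-01-03/meta-data/public-keys", token).splitlines())
    print(imds_get("/2021-01-03/meta-data/public-keys/0/openssh-key", token))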
Dec 16 02:08:08.813436 containerd[1959]: time="2025-12-16T02:08:08.813155101Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 02:08:08.819433 containerd[1959]: time="2025-12-16T02:08:08.818882054Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 02:08:08.819433 containerd[1959]: time="2025-12-16T02:08:08.819389282Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 02:08:08.819931 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 02:08:08.838164 containerd[1959]: time="2025-12-16T02:08:08.837668666Z" level=info msg="containerd successfully booted in 0.875757s" Dec 16 02:08:08.850787 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8182 INFO Agent will take identity from EC2 Dec 16 02:08:08.928026 polkitd[2104]: Started polkitd version 126 Dec 16 02:08:08.950501 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8379 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 02:08:08.957711 polkitd[2104]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 02:08:08.961285 polkitd[2104]: Loading rules from directory /run/polkit-1/rules.d Dec 16 02:08:08.961558 polkitd[2104]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 02:08:08.962233 polkitd[2104]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 02:08:08.962287 polkitd[2104]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 02:08:08.962374 polkitd[2104]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 02:08:08.967285 polkitd[2104]: Finished loading, compiling and executing 2 rules Dec 16 02:08:08.967945 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 02:08:08.971185 dbus-daemon[1928]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 02:08:08.975088 polkitd[2104]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 02:08:09.011817 systemd-hostnamed[1993]: Hostname set to (transient) Dec 16 02:08:09.012040 systemd-resolved[1544]: System hostname changed to 'ip-172-31-29-223'. Dec 16 02:08:09.049303 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8380 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 16 02:08:09.148536 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8380 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 02:08:09.248179 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8380 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 02:08:09.349999 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8380 INFO [Registrar] Starting registrar module Dec 16 02:08:09.452041 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8466 INFO [EC2Identity] Checking disk for registration info Dec 16 02:08:09.483891 tar[1951]: linux-arm64/README.md Dec 16 02:08:09.525167 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 02:08:09.550428 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8467 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 02:08:09.599061 sshd_keygen[1958]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 02:08:09.650092 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Dec 16 02:08:09.658129 amazon-ssm-agent[2021]: 2025-12-16 02:08:08.8467 INFO [EC2Identity] Generating registration keypair Dec 16 02:08:09.660525 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 02:08:09.669256 systemd[1]: Started sshd@0-172.31.29.223:22-139.178.89.65:33214.service - OpenSSH per-connection server daemon (139.178.89.65:33214). Dec 16 02:08:09.730393 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 02:08:09.731512 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 02:08:09.740426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 02:08:09.782665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 02:08:09.791411 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 02:08:09.799644 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 02:08:09.803038 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 02:08:09.996560 amazon-ssm-agent[2021]: 2025-12-16 02:08:09.9961 INFO [EC2Identity] Checking write access before registering Dec 16 02:08:10.025288 sshd[2187]: Accepted publickey for core from 139.178.89.65 port 33214 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:10.031057 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:10.043065 amazon-ssm-agent[2021]: 2025/12/16 02:08:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:10.043065 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:10.043065 amazon-ssm-agent[2021]: 2025/12/16 02:08:10 processing appconfig overrides Dec 16 02:08:10.051835 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 02:08:10.057951 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 02:08:10.085107 systemd-logind[1940]: New session 1 of user core. Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:09.9971 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0408 INFO [EC2Identity] EC2 registration was successful. Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0409 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0411 INFO [CredentialRefresher] credentialRefresher has started Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0411 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0872 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 02:08:10.093214 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0875 INFO [CredentialRefresher] Credentials ready Dec 16 02:08:10.097340 amazon-ssm-agent[2021]: 2025-12-16 02:08:10.0877 INFO [CredentialRefresher] Next credential rotation will be in 29.9999913219 minutes Dec 16 02:08:10.113377 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 02:08:10.123548 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 02:08:10.159322 (systemd)[2201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:10.166904 systemd-logind[1940]: New session 2 of user core. 
Dec 16 02:08:10.334316 ntpd[1933]: Listen normally on 6 eth0 [fe80::47c:41ff:fea3:65c9%2]:123 Dec 16 02:08:10.335589 ntpd[1933]: 16 Dec 02:08:10 ntpd[1933]: Listen normally on 6 eth0 [fe80::47c:41ff:fea3:65c9%2]:123 Dec 16 02:08:10.490729 systemd[2201]: Queued start job for default target default.target. Dec 16 02:08:10.501606 systemd[2201]: Created slice app.slice - User Application Slice. Dec 16 02:08:10.501685 systemd[2201]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 02:08:10.501716 systemd[2201]: Reached target paths.target - Paths. Dec 16 02:08:10.501815 systemd[2201]: Reached target timers.target - Timers. Dec 16 02:08:10.504393 systemd[2201]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 02:08:10.506226 systemd[2201]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 02:08:10.537704 systemd[2201]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 02:08:10.550127 systemd[2201]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 02:08:10.550439 systemd[2201]: Reached target sockets.target - Sockets. Dec 16 02:08:10.550571 systemd[2201]: Reached target basic.target - Basic System. Dec 16 02:08:10.550667 systemd[2201]: Reached target default.target - Main User Target. Dec 16 02:08:10.550737 systemd[2201]: Startup finished in 369ms. Dec 16 02:08:10.551296 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 02:08:10.566339 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 02:08:10.669715 systemd[1]: Started sshd@1-172.31.29.223:22-139.178.89.65:56514.service - OpenSSH per-connection server daemon (139.178.89.65:56514). Dec 16 02:08:10.877916 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 56514 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:10.880360 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:10.893100 systemd-logind[1940]: New session 3 of user core. Dec 16 02:08:10.903382 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 02:08:10.977515 sshd[2219]: Connection closed by 139.178.89.65 port 56514 Dec 16 02:08:10.977157 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:10.985735 systemd[1]: sshd@1-172.31.29.223:22-139.178.89.65:56514.service: Deactivated successfully. Dec 16 02:08:10.989653 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 02:08:10.997470 systemd-logind[1940]: Session 3 logged out. Waiting for processes to exit. Dec 16 02:08:11.020277 systemd[1]: Started sshd@2-172.31.29.223:22-139.178.89.65:56522.service - OpenSSH per-connection server daemon (139.178.89.65:56522). Dec 16 02:08:11.026609 systemd-logind[1940]: Removed session 3. 
Dec 16 02:08:11.118313 amazon-ssm-agent[2021]: 2025-12-16 02:08:11.1181 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 02:08:11.218709 amazon-ssm-agent[2021]: 2025-12-16 02:08:11.1215 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2230) started Dec 16 02:08:11.224929 sshd[2225]: Accepted publickey for core from 139.178.89.65 port 56522 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:11.228581 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:11.239385 systemd-logind[1940]: New session 4 of user core. Dec 16 02:08:11.254404 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 02:08:11.319172 amazon-ssm-agent[2021]: 2025-12-16 02:08:11.1216 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 02:08:11.337224 sshd[2232]: Connection closed by 139.178.89.65 port 56522 Dec 16 02:08:11.338323 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:11.353200 systemd[1]: sshd@2-172.31.29.223:22-139.178.89.65:56522.service: Deactivated successfully. Dec 16 02:08:11.365052 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 02:08:11.369600 systemd-logind[1940]: Session 4 logged out. Waiting for processes to exit. Dec 16 02:08:11.374633 systemd-logind[1940]: Removed session 4. Dec 16 02:08:12.552198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:12.557622 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 02:08:12.560715 systemd[1]: Startup finished in 2.841s (kernel) + 8.245s (initrd) + 11.783s (userspace) = 22.870s. Dec 16 02:08:12.579859 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:13.955637 systemd-resolved[1544]: Clock change detected. Flushing caches. Dec 16 02:08:14.237769 kubelet[2252]: E1216 02:08:14.237687 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:14.242149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:14.242479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:08:14.243553 systemd[1]: kubelet.service: Consumed 1.542s CPU time, 257.6M memory peak. Dec 16 02:08:20.996835 systemd[1]: Started sshd@3-172.31.29.223:22-139.178.89.65:59818.service - OpenSSH per-connection server daemon (139.178.89.65:59818). Dec 16 02:08:21.198778 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 59818 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:21.201560 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:21.211915 systemd-logind[1940]: New session 5 of user core. Dec 16 02:08:21.216362 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 02:08:21.283693 sshd[2267]: Connection closed by 139.178.89.65 port 59818 Dec 16 02:08:21.284834 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:21.293407 systemd[1]: sshd@3-172.31.29.223:22-139.178.89.65:59818.service: Deactivated successfully. Dec 16 02:08:21.297643 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 02:08:21.300357 systemd-logind[1940]: Session 5 logged out. Waiting for processes to exit. Dec 16 02:08:21.303687 systemd-logind[1940]: Removed session 5. Dec 16 02:08:21.321003 systemd[1]: Started sshd@4-172.31.29.223:22-139.178.89.65:59832.service - OpenSSH per-connection server daemon (139.178.89.65:59832). Dec 16 02:08:21.525299 sshd[2273]: Accepted publickey for core from 139.178.89.65 port 59832 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:21.527882 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:21.536735 systemd-logind[1940]: New session 6 of user core. Dec 16 02:08:21.556425 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 02:08:21.615432 sshd[2277]: Connection closed by 139.178.89.65 port 59832 Dec 16 02:08:21.616534 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:21.624137 systemd[1]: sshd@4-172.31.29.223:22-139.178.89.65:59832.service: Deactivated successfully. Dec 16 02:08:21.629758 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 02:08:21.632710 systemd-logind[1940]: Session 6 logged out. Waiting for processes to exit. Dec 16 02:08:21.635195 systemd-logind[1940]: Removed session 6. Dec 16 02:08:21.655607 systemd[1]: Started sshd@5-172.31.29.223:22-139.178.89.65:59840.service - OpenSSH per-connection server daemon (139.178.89.65:59840). Dec 16 02:08:21.857200 sshd[2283]: Accepted publickey for core from 139.178.89.65 port 59840 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:21.859738 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:21.868068 systemd-logind[1940]: New session 7 of user core. Dec 16 02:08:21.881399 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 02:08:21.948597 sshd[2287]: Connection closed by 139.178.89.65 port 59840 Dec 16 02:08:21.949030 sshd-session[2283]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:21.958561 systemd-logind[1940]: Session 7 logged out. Waiting for processes to exit. Dec 16 02:08:21.959630 systemd[1]: sshd@5-172.31.29.223:22-139.178.89.65:59840.service: Deactivated successfully. Dec 16 02:08:21.963889 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 02:08:21.967976 systemd-logind[1940]: Removed session 7. Dec 16 02:08:21.998678 systemd[1]: Started sshd@6-172.31.29.223:22-139.178.89.65:59846.service - OpenSSH per-connection server daemon (139.178.89.65:59846). Dec 16 02:08:22.195279 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 59846 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:22.198650 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:22.209212 systemd-logind[1940]: New session 8 of user core. Dec 16 02:08:22.217480 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 02:08:22.281347 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 02:08:22.282011 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:22.293520 sudo[2298]: pam_unix(sudo:session): session closed for user root Dec 16 02:08:22.318942 sshd[2297]: Connection closed by 139.178.89.65 port 59846 Dec 16 02:08:22.318737 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:22.327278 systemd[1]: sshd@6-172.31.29.223:22-139.178.89.65:59846.service: Deactivated successfully. Dec 16 02:08:22.331475 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 02:08:22.333522 systemd-logind[1940]: Session 8 logged out. Waiting for processes to exit. Dec 16 02:08:22.337083 systemd-logind[1940]: Removed session 8. Dec 16 02:08:22.356765 systemd[1]: Started sshd@7-172.31.29.223:22-139.178.89.65:59856.service - OpenSSH per-connection server daemon (139.178.89.65:59856). Dec 16 02:08:22.555655 sshd[2305]: Accepted publickey for core from 139.178.89.65 port 59856 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:22.558540 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:22.568432 systemd-logind[1940]: New session 9 of user core. Dec 16 02:08:22.580492 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 02:08:22.633726 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 02:08:22.634555 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:22.638765 sudo[2311]: pam_unix(sudo:session): session closed for user root Dec 16 02:08:22.653449 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 02:08:22.654811 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:22.669744 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 02:08:22.737000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:08:22.739785 kernel: kauditd_printk_skb: 39 callbacks suppressed Dec 16 02:08:22.739884 kernel: audit: type=1305 audit(1765850902.737:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:08:22.740292 augenrules[2335]: No rules Dec 16 02:08:22.737000 audit[2335]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc08fff0 a2=420 a3=0 items=0 ppid=2316 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:22.746040 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:08:22.746847 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 02:08:22.749422 kernel: audit: type=1300 audit(1765850902.737:228): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc08fff0 a2=420 a3=0 items=0 ppid=2316 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:22.737000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:08:22.752453 kernel: audit: type=1327 audit(1765850902.737:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:08:22.752750 sudo[2310]: pam_unix(sudo:session): session closed for user root Dec 16 02:08:22.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.757502 kernel: audit: type=1130 audit(1765850902.746:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.762094 kernel: audit: type=1131 audit(1765850902.746:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.751000 audit[2310]: USER_END pid=2310 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.767225 kernel: audit: type=1106 audit(1765850902.751:231): pid=2310 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.767325 kernel: audit: type=1104 audit(1765850902.751:232): pid=2310 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.751000 audit[2310]: CRED_DISP pid=2310 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:22.779943 sshd[2309]: Connection closed by 139.178.89.65 port 59856 Dec 16 02:08:22.780916 sshd-session[2305]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:22.782000 audit[2305]: USER_END pid=2305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:22.797468 kernel: audit: type=1106 audit(1765850902.782:233): pid=2305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:22.797616 kernel: audit: type=1104 audit(1765850902.782:234): pid=2305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:22.782000 audit[2305]: CRED_DISP pid=2305 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:22.792579 systemd[1]: sshd@7-172.31.29.223:22-139.178.89.65:59856.service: Deactivated successfully. Dec 16 02:08:22.796447 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 02:08:22.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.223:22-139.178.89.65:59856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.805103 kernel: audit: type=1131 audit(1765850902.792:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.223:22-139.178.89.65:59856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.805305 systemd-logind[1940]: Session 9 logged out. Waiting for processes to exit. Dec 16 02:08:22.823528 systemd[1]: Started sshd@8-172.31.29.223:22-139.178.89.65:59860.service - OpenSSH per-connection server daemon (139.178.89.65:59860). Dec 16 02:08:22.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.223:22-139.178.89.65:59860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:22.826102 systemd-logind[1940]: Removed session 9. 
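The PROCTITLE fields in the audit records above carry the audited command line hex-encoded, with NUL bytes separating the arguments; the one logged for the audit-rules restart decodes to /sbin/auditctl -R /etc/audit/audit.rules. A small decoding sketch (the sample value is copied from that record; everything else is standard-library Python):

#!/usr/bin/env python3
"""Decode the hex proctitle field from an audit PROCTITLE record.

Arguments are NUL-separated inside the hex blob, so splitting on b"\x00"
recovers the original argv."""

def decode_proctitle(hex_blob: str) -> list[str]:
    raw = bytes.fromhex(hex_blob)
    return [arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg]

if __name__ == "__main__":
    # proctitle= value from the audit-rules restart logged above.
    sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(sample))
    # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

The same decoding applies to the NETFILTER_CFG records further down; the first one dockerd generates, for example, decodes to /usr/bin/iptables --wait -t nat -N DOCKER.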
Dec 16 02:08:23.018000 audit[2344]: USER_ACCT pid=2344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:23.021251 sshd[2344]: Accepted publickey for core from 139.178.89.65 port 59860 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:23.020000 audit[2344]: CRED_ACQ pid=2344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:23.020000 audit[2344]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1cd00c0 a2=3 a3=0 items=0 ppid=1 pid=2344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:23.020000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:23.022913 sshd-session[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:23.032150 systemd-logind[1940]: New session 10 of user core. Dec 16 02:08:23.041479 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 02:08:23.047000 audit[2344]: USER_START pid=2344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:23.051000 audit[2348]: CRED_ACQ pid=2348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:23.091000 audit[2349]: USER_ACCT pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:23.093829 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 02:08:23.093000 audit[2349]: CRED_REFR pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:23.094883 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:23.093000 audit[2349]: USER_START pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:23.628486 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 02:08:23.660673 (dockerd)[2367]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 02:08:24.064608 dockerd[2367]: time="2025-12-16T02:08:24.064474142Z" level=info msg="Starting up" Dec 16 02:08:24.067522 dockerd[2367]: time="2025-12-16T02:08:24.067438598Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 02:08:24.090800 dockerd[2367]: time="2025-12-16T02:08:24.090723086Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 02:08:24.153416 systemd[1]: var-lib-docker-metacopy\x2dcheck931243990-merged.mount: Deactivated successfully. Dec 16 02:08:24.166958 dockerd[2367]: time="2025-12-16T02:08:24.166852982Z" level=info msg="Loading containers: start." Dec 16 02:08:24.183440 kernel: Initializing XFRM netlink socket Dec 16 02:08:24.274000 audit[2416]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.274000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd8519100 a2=0 a3=0 items=0 ppid=2367 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:08:24.279000 audit[2418]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.279000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff22e8130 a2=0 a3=0 items=0 ppid=2367 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:08:24.284000 audit[2420]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.284000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe31e2630 a2=0 a3=0 items=0 ppid=2367 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:08:24.288000 audit[2422]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.288000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4da3080 a2=0 a3=0 items=0 ppid=2367 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.288000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:08:24.292000 audit[2424]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.292000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffdeb69b0 a2=0 a3=0 items=0 ppid=2367 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:08:24.297000 audit[2426]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.297000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd4106ea0 a2=0 a3=0 items=0 ppid=2367 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:24.304000 audit[2428]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.304000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd3c47260 a2=0 a3=0 items=0 ppid=2367 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.304000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:08:24.309000 audit[2430]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.309000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff9bd09f0 a2=0 a3=0 items=0 ppid=2367 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:08:24.354000 audit[2433]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.354000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffffe593c30 a2=0 a3=0 items=0 ppid=2367 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.354000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 02:08:24.358000 audit[2435]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.358000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc912b300 a2=0 a3=0 items=0 ppid=2367 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:08:24.362000 audit[2437]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.362000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff193cda0 a2=0 a3=0 items=0 ppid=2367 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:08:24.367000 audit[2439]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.367000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff06578f0 a2=0 a3=0 items=0 ppid=2367 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.367000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:24.372000 audit[2441]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.372000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc7575600 a2=0 a3=0 items=0 ppid=2367 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.372000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:08:24.448000 audit[2471]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.448000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcc1ca200 a2=0 a3=0 items=0 ppid=2367 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.448000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:08:24.453000 audit[2473]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.453000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffdfd3df0 a2=0 a3=0 items=0 ppid=2367 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.453000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:08:24.458000 audit[2475]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.458000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc68ff20 a2=0 a3=0 items=0 ppid=2367 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:08:24.463000 audit[2477]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.463000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2026500 a2=0 a3=0 items=0 ppid=2367 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:08:24.467000 audit[2479]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.467000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffad04d50 a2=0 a3=0 items=0 ppid=2367 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:08:24.471000 audit[2481]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.471000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff92b7100 a2=0 a3=0 items=0 ppid=2367 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.471000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:24.476994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 16 02:08:24.476000 audit[2483]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.476000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc87776d0 a2=0 a3=0 items=0 ppid=2367 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:08:24.482722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:24.484000 audit[2486]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.484000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff44105e0 a2=0 a3=0 items=0 ppid=2367 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.484000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:08:24.491000 audit[2488]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.491000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe44c2340 a2=0 a3=0 items=0 ppid=2367 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 02:08:24.497000 audit[2490]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.497000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff5076b10 a2=0 a3=0 items=0 ppid=2367 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:08:24.504000 audit[2493]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.504000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff0c86e00 a2=0 a3=0 items=0 ppid=2367 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.504000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:08:24.510000 audit[2496]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.510000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffffb8a52f0 a2=0 a3=0 items=0 ppid=2367 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.510000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:24.517000 audit[2498]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.517000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffe8f7dd0 a2=0 a3=0 items=0 ppid=2367 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.517000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:08:24.534000 audit[2503]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.534000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd8cca2f0 a2=0 a3=0 items=0 ppid=2367 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.534000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:08:24.540000 audit[2505]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.540000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffdb6098b0 a2=0 a3=0 items=0 ppid=2367 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.540000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:08:24.547000 audit[2507]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.547000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffebac8040 a2=0 a3=0 items=0 ppid=2367 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:08:24.553000 audit[2509]: NETFILTER_CFG table=filter:31 
family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.553000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe6ea7c00 a2=0 a3=0 items=0 ppid=2367 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.553000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:08:24.558000 audit[2511]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.558000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff15ef670 a2=0 a3=0 items=0 ppid=2367 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:08:24.563000 audit[2513]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:24.563000 audit[2513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd648de00 a2=0 a3=0 items=0 ppid=2367 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:08:24.584345 (udev-worker)[2389]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 02:08:24.608000 audit[2517]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.608000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe57fe4c0 a2=0 a3=0 items=0 ppid=2367 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 02:08:24.618000 audit[2519]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.618000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffcb344b0 a2=0 a3=0 items=0 ppid=2367 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 02:08:24.643000 audit[2527]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.643000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd74197d0 a2=0 a3=0 items=0 ppid=2367 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 02:08:24.664000 audit[2533]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.664000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffec8dc940 a2=0 a3=0 items=0 ppid=2367 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 02:08:24.672000 audit[2535]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.672000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc77cd9e0 a2=0 a3=0 items=0 ppid=2367 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.672000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 02:08:24.680000 audit[2537]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.680000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd031fa50 a2=0 a3=0 items=0 ppid=2367 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 02:08:24.686000 audit[2539]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.686000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc1625db0 a2=0 a3=0 items=0 ppid=2367 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.686000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:08:24.693000 audit[2541]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:24.693000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff0dfefc0 a2=0 a3=0 items=0 ppid=2367 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 02:08:24.695631 systemd-networkd[1872]: docker0: Link UP Dec 16 02:08:24.713696 dockerd[2367]: time="2025-12-16T02:08:24.713517533Z" level=info msg="Loading containers: done." Dec 16 02:08:24.749697 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1892698008-merged.mount: Deactivated successfully. 
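[annotation] The run of NETFILTER_CFG events above is dockerd installing its bridge chains (DOCKER, DOCKER-USER, DOCKER-FORWARD, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2) through xtables-nft before docker0 comes up. A sketch that tallies those events by table, family and operation; `journal_text` and `netfilter_summary` are placeholders, and the regex is keyed to the "table=<name>:<id> family=<n> … op=<op>" fields shown above.

```python
import re
from collections import Counter

# Sketch under assumptions: `journal_text` holds a raw journal excerpt formatted
# like the lines above; only NETFILTER_CFG records are counted.
EVENT = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=\d+ op=(\w+)")

def netfilter_summary(journal_text: str) -> Counter:
    """Count (table, family, op) triples, e.g. ('filter', '2', 'nft_register_rule')."""
    return Counter(EVENT.findall(journal_text))
```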
Dec 16 02:08:24.785472 dockerd[2367]: time="2025-12-16T02:08:24.785386265Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 02:08:24.786146 dockerd[2367]: time="2025-12-16T02:08:24.786041405Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 02:08:24.786894 dockerd[2367]: time="2025-12-16T02:08:24.786769649Z" level=info msg="Initializing buildkit" Dec 16 02:08:24.856101 dockerd[2367]: time="2025-12-16T02:08:24.855157410Z" level=info msg="Completed buildkit initialization" Dec 16 02:08:24.876187 dockerd[2367]: time="2025-12-16T02:08:24.875814906Z" level=info msg="Daemon has completed initialization" Dec 16 02:08:24.877500 dockerd[2367]: time="2025-12-16T02:08:24.877279734Z" level=info msg="API listen on /run/docker.sock" Dec 16 02:08:24.878268 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 02:08:24.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.949977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:24.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.965666 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:25.046369 kubelet[2582]: E1216 02:08:25.046266 2582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:25.053950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:25.054368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:08:25.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:25.055410 systemd[1]: kubelet.service: Consumed 360ms CPU time, 105.9M memory peak. Dec 16 02:08:26.819037 containerd[1959]: time="2025-12-16T02:08:26.818561287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 02:08:27.704601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022668966.mount: Deactivated successfully. 
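[annotation] The kubelet exit above (status=1) comes from the missing /var/lib/kubelet/config.yaml, which is normally written when the node is initialized or joined with kubeadm; systemd keeps restarting the unit until that file exists. A hypothetical pre-flight check along those lines; the script itself is not part of anything in this log.

```python
import pathlib
import sys

# Hypothetical pre-flight check reproducing the condition behind the kubelet error
# above. The path is copied from the log; "written by kubeadm" is the usual
# bootstrap assumption, not something this log states.
CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

if not CONFIG.is_file():
    sys.exit(f"kubelet will exit on start: {CONFIG} not found (expected to be written by kubeadm)")
print(f"{CONFIG} present, {CONFIG.stat().st_size} bytes")
```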
Dec 16 02:08:29.107337 containerd[1959]: time="2025-12-16T02:08:29.107272591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:29.109991 containerd[1959]: time="2025-12-16T02:08:29.109915147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=26569720" Dec 16 02:08:29.111528 containerd[1959]: time="2025-12-16T02:08:29.111473431Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:29.117538 containerd[1959]: time="2025-12-16T02:08:29.117460915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:29.119888 containerd[1959]: time="2025-12-16T02:08:29.119795551Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.301128232s" Dec 16 02:08:29.119888 containerd[1959]: time="2025-12-16T02:08:29.119874079Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 02:08:29.122583 containerd[1959]: time="2025-12-16T02:08:29.122514175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 02:08:31.355141 containerd[1959]: time="2025-12-16T02:08:31.354167758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:31.355971 containerd[1959]: time="2025-12-16T02:08:31.355869430Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23547679" Dec 16 02:08:31.359700 containerd[1959]: time="2025-12-16T02:08:31.359603950Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:31.365103 containerd[1959]: time="2025-12-16T02:08:31.364953862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:31.368660 containerd[1959]: time="2025-12-16T02:08:31.367312198Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 2.244501227s" Dec 16 02:08:31.368660 containerd[1959]: time="2025-12-16T02:08:31.367383598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 02:08:31.368660 
containerd[1959]: time="2025-12-16T02:08:31.368625706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 02:08:33.075165 containerd[1959]: time="2025-12-16T02:08:33.074502682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:33.076925 containerd[1959]: time="2025-12-16T02:08:33.076616530Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Dec 16 02:08:33.077992 containerd[1959]: time="2025-12-16T02:08:33.077907790Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:33.087083 containerd[1959]: time="2025-12-16T02:08:33.085280687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:33.087820 containerd[1959]: time="2025-12-16T02:08:33.087757475Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.719030369s" Dec 16 02:08:33.088026 containerd[1959]: time="2025-12-16T02:08:33.087992159Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 02:08:33.088851 containerd[1959]: time="2025-12-16T02:08:33.088804463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 02:08:34.411253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900049679.mount: Deactivated successfully. 
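[annotation] Each completed pull above reports both the image size and the wall-clock time ("size \"27383880\" in 2.301128232s", and so on), so a rough effective pull rate can be read straight off the log. A sketch with the numbers copied from the messages above; the rate mixes download and unpack time, so it is only indicative.

```python
# Sizes and durations copied from the "Pulled image ... size ... in ..." messages above.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.33.7":           (27_383_880, 2.301128232),
    "registry.k8s.io/kube-controller-manager:v1.33.7":  (25_137_562, 2.244501227),
    "registry.k8s.io/kube-scheduler:v1.33.7":           (19_882_566, 1.719030369),
}
for ref, (size_bytes, seconds) in pulls.items():
    print(f"{ref}: {size_bytes / seconds / 1e6:.1f} MB/s")
```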
Dec 16 02:08:35.048503 containerd[1959]: time="2025-12-16T02:08:35.047344944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:35.049395 containerd[1959]: time="2025-12-16T02:08:35.049330692Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Dec 16 02:08:35.051796 containerd[1959]: time="2025-12-16T02:08:35.051752556Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:35.056160 containerd[1959]: time="2025-12-16T02:08:35.056097888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:35.057479 containerd[1959]: time="2025-12-16T02:08:35.057429384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.968417069s" Dec 16 02:08:35.057668 containerd[1959]: time="2025-12-16T02:08:35.057638640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 02:08:35.058400 containerd[1959]: time="2025-12-16T02:08:35.058351380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 02:08:35.227121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 02:08:35.229922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:35.628690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:35.631360 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 02:08:35.631514 kernel: audit: type=1130 audit(1765850915.628:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:35.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:35.649753 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:35.747556 kubelet[2676]: E1216 02:08:35.747161 2676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:35.753924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:35.754703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:08:35.756134 systemd[1]: kubelet.service: Consumed 346ms CPU time, 107.4M memory peak. 
Dec 16 02:08:35.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:35.763412 kernel: audit: type=1131 audit(1765850915.754:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:35.960450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199803419.mount: Deactivated successfully. Dec 16 02:08:37.302924 containerd[1959]: time="2025-12-16T02:08:37.302835555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:37.305346 containerd[1959]: time="2025-12-16T02:08:37.305250267Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Dec 16 02:08:37.308188 containerd[1959]: time="2025-12-16T02:08:37.308089647Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:37.315027 containerd[1959]: time="2025-12-16T02:08:37.314921980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:37.318810 containerd[1959]: time="2025-12-16T02:08:37.318718156Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.260306272s" Dec 16 02:08:37.318810 containerd[1959]: time="2025-12-16T02:08:37.318808768Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 16 02:08:37.319984 containerd[1959]: time="2025-12-16T02:08:37.319917664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 02:08:37.802334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490885634.mount: Deactivated successfully. 
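[annotation] The mount units above ("var-lib-containerd-tmpmounts-containerd\x2dmount…") use systemd's unit-name escaping: "/" is written as "-" and a literal "-" as "\x2d". A sketch that reverses just those two escapes to recover the mount point; other \xNN escapes are not handled, and the helper name is illustrative.

```python
# Sketch: reverse the two systemd unit-name escapes visible in this log
# ("-" for "/" and "\x2d" for a literal "-"); other \xNN escapes are ignored.
def mount_unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    return "/" + name.replace(r"\x2d", "\x00").replace("-", "/").replace("\x00", "-")

print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2199803419.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount2199803419
```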
Dec 16 02:08:37.818426 containerd[1959]: time="2025-12-16T02:08:37.818335470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:37.821528 containerd[1959]: time="2025-12-16T02:08:37.821398950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:08:37.824235 containerd[1959]: time="2025-12-16T02:08:37.824135142Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:37.829560 containerd[1959]: time="2025-12-16T02:08:37.829461270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:37.832522 containerd[1959]: time="2025-12-16T02:08:37.832426218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.44631ms" Dec 16 02:08:37.832522 containerd[1959]: time="2025-12-16T02:08:37.832511070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 02:08:37.833792 containerd[1959]: time="2025-12-16T02:08:37.833324682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 02:08:38.427567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059318742.mount: Deactivated successfully. Dec 16 02:08:38.654889 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 02:08:38.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:38.668129 kernel: audit: type=1131 audit(1765850918.655:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:38.682000 audit: BPF prog-id=62 op=UNLOAD Dec 16 02:08:38.686109 kernel: audit: type=1334 audit(1765850918.682:291): prog-id=62 op=UNLOAD Dec 16 02:08:41.109086 containerd[1959]: time="2025-12-16T02:08:41.109002558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:41.113474 containerd[1959]: time="2025-12-16T02:08:41.113195226Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=68134789" Dec 16 02:08:41.114613 containerd[1959]: time="2025-12-16T02:08:41.114555630Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:41.121201 containerd[1959]: time="2025-12-16T02:08:41.121113894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:41.122891 containerd[1959]: time="2025-12-16T02:08:41.122650170Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.289264384s" Dec 16 02:08:41.122891 containerd[1959]: time="2025-12-16T02:08:41.122704962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 16 02:08:45.977027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 02:08:45.981409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:46.445367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:46.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:46.455100 kernel: audit: type=1130 audit(1765850926.444:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:46.458746 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:46.532443 kubelet[2827]: E1216 02:08:46.532370 2827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:46.537123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:46.537425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:08:46.538516 systemd[1]: kubelet.service: Consumed 300ms CPU time, 106.8M memory peak. 
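[annotation] containerd reports pull times as Go duration strings ("512.44631ms" for pause above, "3.289264384s" for etcd). A small converter for the unit suffixes that actually occur in this log; multi-component values are summed, anything else is simply ignored by the regex.

```python
import re

# Sketch: convert Go-style duration strings (as printed by containerd above) to seconds.
_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0}
_PART = re.compile(r"(\d+(?:\.\d+)?)(ns|us|ms|s)")

def to_seconds(duration: str) -> float:
    return sum(float(value) * _UNITS[unit] for value, unit in _PART.findall(duration))

assert abs(to_seconds("512.44631ms") - 0.51244631) < 1e-12
assert abs(to_seconds("3.289264384s") - 3.289264384) < 1e-12
```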
Dec 16 02:08:46.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:46.545154 kernel: audit: type=1131 audit(1765850926.537:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:48.854183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:48.855251 systemd[1]: kubelet.service: Consumed 300ms CPU time, 106.8M memory peak. Dec 16 02:08:48.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:48.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:48.860196 kernel: audit: type=1130 audit(1765850928.854:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:48.863508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:48.866789 kernel: audit: type=1131 audit(1765850928.854:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:48.921417 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-10.scope)... Dec 16 02:08:48.921450 systemd[1]: Reloading... Dec 16 02:08:49.207511 zram_generator::config[2891]: No configuration found. Dec 16 02:08:49.695834 systemd[1]: Reloading finished in 773 ms. 
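[annotation] Before this daemon reload the kubelet unit has already been through restart counters 2 and 3 (02:08:35.227 and 02:08:45.977 above). A sketch that measures the gap between those attempts; the timestamps are copied from the log and the year is an assumption taken from the containerd timestamps (2025).

```python
from datetime import datetime

# Timestamps copied from the "Scheduled restart job, restart counter is at N"
# lines above; the year is assumed from the 2025-12-16 containerd timestamps.
attempts = {2: "2025 Dec 16 02:08:35.227121", 3: "2025 Dec 16 02:08:45.977027"}
parsed = {n: datetime.strptime(ts, "%Y %b %d %H:%M:%S.%f") for n, ts in attempts.items()}
gap = (parsed[3] - parsed[2]).total_seconds()
print(f"restart #3 came {gap:.1f}s after restart #2")   # ~10.7s
```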
Dec 16 02:08:49.749791 kernel: audit: type=1334 audit(1765850929.743:296): prog-id=66 op=LOAD Dec 16 02:08:49.749961 kernel: audit: type=1334 audit(1765850929.743:297): prog-id=47 op=UNLOAD Dec 16 02:08:49.743000 audit: BPF prog-id=66 op=LOAD Dec 16 02:08:49.743000 audit: BPF prog-id=47 op=UNLOAD Dec 16 02:08:49.752614 kernel: audit: type=1334 audit(1765850929.743:298): prog-id=67 op=LOAD Dec 16 02:08:49.743000 audit: BPF prog-id=67 op=LOAD Dec 16 02:08:49.755605 kernel: audit: type=1334 audit(1765850929.745:299): prog-id=68 op=LOAD Dec 16 02:08:49.745000 audit: BPF prog-id=68 op=LOAD Dec 16 02:08:49.758693 kernel: audit: type=1334 audit(1765850929.745:300): prog-id=48 op=UNLOAD Dec 16 02:08:49.745000 audit: BPF prog-id=48 op=UNLOAD Dec 16 02:08:49.761403 kernel: audit: type=1334 audit(1765850929.745:301): prog-id=49 op=UNLOAD Dec 16 02:08:49.745000 audit: BPF prog-id=49 op=UNLOAD Dec 16 02:08:49.748000 audit: BPF prog-id=69 op=LOAD Dec 16 02:08:49.748000 audit: BPF prog-id=46 op=UNLOAD Dec 16 02:08:49.751000 audit: BPF prog-id=70 op=LOAD Dec 16 02:08:49.751000 audit: BPF prog-id=58 op=UNLOAD Dec 16 02:08:49.754000 audit: BPF prog-id=71 op=LOAD Dec 16 02:08:49.754000 audit: BPF prog-id=50 op=UNLOAD Dec 16 02:08:49.754000 audit: BPF prog-id=72 op=LOAD Dec 16 02:08:49.755000 audit: BPF prog-id=73 op=LOAD Dec 16 02:08:49.755000 audit: BPF prog-id=51 op=UNLOAD Dec 16 02:08:49.755000 audit: BPF prog-id=52 op=UNLOAD Dec 16 02:08:49.757000 audit: BPF prog-id=74 op=LOAD Dec 16 02:08:49.757000 audit: BPF prog-id=65 op=UNLOAD Dec 16 02:08:49.763000 audit: BPF prog-id=75 op=LOAD Dec 16 02:08:49.765000 audit: BPF prog-id=59 op=UNLOAD Dec 16 02:08:49.765000 audit: BPF prog-id=76 op=LOAD Dec 16 02:08:49.765000 audit: BPF prog-id=77 op=LOAD Dec 16 02:08:49.765000 audit: BPF prog-id=60 op=UNLOAD Dec 16 02:08:49.765000 audit: BPF prog-id=61 op=UNLOAD Dec 16 02:08:49.766000 audit: BPF prog-id=78 op=LOAD Dec 16 02:08:49.766000 audit: BPF prog-id=53 op=UNLOAD Dec 16 02:08:49.767000 audit: BPF prog-id=79 op=LOAD Dec 16 02:08:49.767000 audit: BPF prog-id=80 op=LOAD Dec 16 02:08:49.767000 audit: BPF prog-id=54 op=UNLOAD Dec 16 02:08:49.767000 audit: BPF prog-id=55 op=UNLOAD Dec 16 02:08:49.769000 audit: BPF prog-id=81 op=LOAD Dec 16 02:08:49.769000 audit: BPF prog-id=82 op=LOAD Dec 16 02:08:49.769000 audit: BPF prog-id=56 op=UNLOAD Dec 16 02:08:49.769000 audit: BPF prog-id=57 op=UNLOAD Dec 16 02:08:49.770000 audit: BPF prog-id=83 op=LOAD Dec 16 02:08:49.770000 audit: BPF prog-id=43 op=UNLOAD Dec 16 02:08:49.770000 audit: BPF prog-id=84 op=LOAD Dec 16 02:08:49.770000 audit: BPF prog-id=85 op=LOAD Dec 16 02:08:49.771000 audit: BPF prog-id=44 op=UNLOAD Dec 16 02:08:49.771000 audit: BPF prog-id=45 op=UNLOAD Dec 16 02:08:49.799134 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 02:08:49.799317 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 02:08:49.800043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:49.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:49.800174 systemd[1]: kubelet.service: Consumed 233ms CPU time, 95.1M memory peak. Dec 16 02:08:49.803506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:50.151819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
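[annotation] The burst of "BPF prog-id=… op=LOAD/UNLOAD" records above coincides with the systemd reload. A sketch that simply counts loads versus unloads in a raw excerpt; `journal_text` and `bpf_ops` are placeholders, and the regex assumes the same audit formatting as the lines above.

```python
import re
from collections import Counter

# Sketch: count BPF program load/unload audit records in a raw journal excerpt.
BPF = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def bpf_ops(journal_text: str) -> Counter:
    return Counter(op for _prog_id, op in BPF.findall(journal_text))
```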
Dec 16 02:08:50.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:50.168567 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:08:50.241824 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:08:50.241824 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:08:50.241824 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:08:50.242424 kubelet[2951]: I1216 02:08:50.241880 2951 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:08:51.533009 kubelet[2951]: I1216 02:08:51.532940 2951 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 02:08:51.535084 kubelet[2951]: I1216 02:08:51.533782 2951 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:08:51.535084 kubelet[2951]: I1216 02:08:51.534354 2951 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:08:51.587722 kubelet[2951]: E1216 02:08:51.587657 2951 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.29.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 02:08:51.589977 kubelet[2951]: I1216 02:08:51.589923 2951 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:08:51.604894 kubelet[2951]: I1216 02:08:51.604862 2951 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:08:51.613741 kubelet[2951]: I1216 02:08:51.613685 2951 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 02:08:51.614471 kubelet[2951]: I1216 02:08:51.614410 2951 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:08:51.614766 kubelet[2951]: I1216 02:08:51.614469 2951 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:08:51.615001 kubelet[2951]: I1216 02:08:51.614924 2951 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:08:51.615001 kubelet[2951]: I1216 02:08:51.614952 2951 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 02:08:51.615399 kubelet[2951]: I1216 02:08:51.615356 2951 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:08:51.621895 kubelet[2951]: I1216 02:08:51.621822 2951 kubelet.go:480] "Attempting to sync node with API server" Dec 16 02:08:51.621895 kubelet[2951]: I1216 02:08:51.621876 2951 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:08:51.624768 kubelet[2951]: I1216 02:08:51.624704 2951 kubelet.go:386] "Adding apiserver pod source" Dec 16 02:08:51.624768 kubelet[2951]: I1216 02:08:51.624767 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:08:51.628810 kubelet[2951]: E1216 02:08:51.628759 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.29.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-223&limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:08:51.631384 kubelet[2951]: E1216 02:08:51.631322 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Dec 16 02:08:51.631767 kubelet[2951]: I1216 02:08:51.631726 2951 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:08:51.633271 kubelet[2951]: I1216 02:08:51.633226 2951 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:08:51.633726 kubelet[2951]: W1216 02:08:51.633694 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 02:08:51.650202 kubelet[2951]: I1216 02:08:51.650140 2951 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 02:08:51.650365 kubelet[2951]: I1216 02:08:51.650235 2951 server.go:1289] "Started kubelet" Dec 16 02:08:51.650907 kubelet[2951]: I1216 02:08:51.650816 2951 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:08:51.653084 kubelet[2951]: I1216 02:08:51.652935 2951 server.go:317] "Adding debug handlers to kubelet server" Dec 16 02:08:51.654332 kubelet[2951]: I1216 02:08:51.654216 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:08:51.654923 kubelet[2951]: I1216 02:08:51.654862 2951 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:08:51.658110 kubelet[2951]: E1216 02:08:51.655217 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.223:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-223.18819011034d3873 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-223,UID:ip-172-31-29-223,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-223,},FirstTimestamp:2025-12-16 02:08:51.650181235 +0000 UTC m=+1.474716837,LastTimestamp:2025-12-16 02:08:51.650181235 +0000 UTC m=+1.474716837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-223,}" Dec 16 02:08:51.662185 kubelet[2951]: I1216 02:08:51.661746 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:08:51.664391 kubelet[2951]: I1216 02:08:51.664115 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:08:51.671103 kubelet[2951]: I1216 02:08:51.670450 2951 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 02:08:51.671322 kubelet[2951]: E1216 02:08:51.671291 2951 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-223\" not found" Dec 16 02:08:51.671767 kubelet[2951]: E1216 02:08:51.671693 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-223?timeout=10s\": dial tcp 172.31.29.223:6443: connect: connection refused" interval="200ms" Dec 16 02:08:51.672271 kubelet[2951]: I1216 02:08:51.672209 2951 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 02:08:51.672602 kubelet[2951]: I1216 02:08:51.672559 2951 
reconciler.go:26] "Reconciler: start to sync state" Dec 16 02:08:51.672741 kubelet[2951]: I1216 02:08:51.672715 2951 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:08:51.672978 kubelet[2951]: I1216 02:08:51.672944 2951 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:08:51.673882 kubelet[2951]: E1216 02:08:51.673841 2951 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:08:51.674669 kubelet[2951]: E1216 02:08:51.674472 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 02:08:51.679102 kubelet[2951]: I1216 02:08:51.676975 2951 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:08:51.683000 audit[2968]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.686080 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 16 02:08:51.686247 kernel: audit: type=1325 audit(1765850931.683:338): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.683000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc1cabd90 a2=0 a3=0 items=0 ppid=2951 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.697089 kernel: audit: type=1300 audit(1765850931.683:338): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc1cabd90 a2=0 a3=0 items=0 ppid=2951 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:08:51.701274 kernel: audit: type=1327 audit(1765850931.683:338): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:08:51.702000 audit[2971]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.707443 kernel: audit: type=1325 audit(1765850931.702:339): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.702000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2f3d060 a2=0 a3=0 items=0 ppid=2951 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.715878 kernel: audit: type=1300 audit(1765850931.702:339): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2f3d060 a2=0 a3=0 
items=0 ppid=2951 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:08:51.720157 kernel: audit: type=1327 audit(1765850931.702:339): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:08:51.722562 kubelet[2951]: I1216 02:08:51.722522 2951 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:08:51.722772 kubelet[2951]: I1216 02:08:51.722747 2951 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:08:51.723119 kubelet[2951]: I1216 02:08:51.723090 2951 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:08:51.724000 audit[2973]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.724000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffecd9ad80 a2=0 a3=0 items=0 ppid=2951 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.729612 kubelet[2951]: I1216 02:08:51.728882 2951 policy_none.go:49] "None policy: Start" Dec 16 02:08:51.729612 kubelet[2951]: I1216 02:08:51.728919 2951 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 02:08:51.729612 kubelet[2951]: I1216 02:08:51.728945 2951 state_mem.go:35] "Initializing new in-memory state store" Dec 16 02:08:51.735339 kernel: audit: type=1325 audit(1765850931.724:340): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.735496 kernel: audit: type=1300 audit(1765850931.724:340): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffecd9ad80 a2=0 a3=0 items=0 ppid=2951 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:08:51.739555 kernel: audit: type=1327 audit(1765850931.724:340): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:08:51.739000 audit[2975]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.743946 kernel: audit: type=1325 audit(1765850931.739:341): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.739000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff0bafea0 a2=0 a3=0 items=0 ppid=2951 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.739000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:08:51.746887 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 02:08:51.765742 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 02:08:51.771631 kubelet[2951]: E1216 02:08:51.771594 2951 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-223\" not found" Dec 16 02:08:51.773000 audit[2978]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.773000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe11e92f0 a2=0 a3=0 items=0 ppid=2951 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 02:08:51.777739 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 02:08:51.780108 kubelet[2951]: I1216 02:08:51.778183 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 02:08:51.782000 audit[2980]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:51.782000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff8a3f8c0 a2=0 a3=0 items=0 ppid=2951 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:08:51.785987 kubelet[2951]: I1216 02:08:51.785887 2951 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 02:08:51.785987 kubelet[2951]: I1216 02:08:51.785932 2951 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 02:08:51.785987 kubelet[2951]: I1216 02:08:51.785974 2951 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 02:08:51.785987 kubelet[2951]: I1216 02:08:51.785989 2951 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 02:08:51.786280 kubelet[2951]: E1216 02:08:51.786136 2951 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:08:51.785000 audit[2981]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.785000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc02a77b0 a2=0 a3=0 items=0 ppid=2951 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:08:51.792000 audit[2982]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:51.792000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7bc5860 a2=0 a3=0 items=0 ppid=2951 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:08:51.793000 audit[2983]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.793000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff174d9a0 a2=0 a3=0 items=0 ppid=2951 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:08:51.799000 audit[2985]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:51.799000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffedee3ae0 a2=0 a3=0 items=0 ppid=2951 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:08:51.802264 kubelet[2951]: E1216 02:08:51.797928 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 02:08:51.802000 audit[2984]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2984 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:51.802000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe59989c0 a2=0 a3=0 items=0 ppid=2951 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:08:51.804630 kubelet[2951]: E1216 02:08:51.803368 2951 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:08:51.804630 kubelet[2951]: I1216 02:08:51.803687 2951 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:08:51.804630 kubelet[2951]: I1216 02:08:51.803707 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:08:51.807476 kubelet[2951]: E1216 02:08:51.807424 2951 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 02:08:51.809969 kubelet[2951]: E1216 02:08:51.809481 2951 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-223\" not found" Dec 16 02:08:51.809969 kubelet[2951]: I1216 02:08:51.808493 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:08:51.812000 audit[2986]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:51.812000 audit[2986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea56d150 a2=0 a3=0 items=0 ppid=2951 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:51.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:08:51.873492 kubelet[2951]: E1216 02:08:51.873417 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-223?timeout=10s\": dial tcp 172.31.29.223:6443: connect: connection refused" interval="400ms" Dec 16 02:08:51.906871 kubelet[2951]: I1216 02:08:51.906801 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-223" Dec 16 02:08:51.907764 kubelet[2951]: E1216 02:08:51.907698 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.223:6443/api/v1/nodes\": dial tcp 172.31.29.223:6443: connect: connection refused" node="ip-172-31-29-223" Dec 16 02:08:51.929110 systemd[1]: Created slice kubepods-burstable-pod1fa16be315968bf2e0dd0dbdf6fc0e5d.slice - libcontainer container kubepods-burstable-pod1fa16be315968bf2e0dd0dbdf6fc0e5d.slice. 
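[annotation] With the cgroup driver set to "systemd" (see the container manager config above), kubelet creates one slice per pod beneath the QoS slices it set up earlier, as in the "Created slice kubepods-burstable-pod…" line above. A sketch of that naming pattern; the guaranteed-QoS branch and the "-" to "_" mapping for dashed UIDs are assumptions, since neither appears in this log (the static-pod UIDs here contain no dashes).

```python
# Sketch of the slice-name pattern seen above. The guaranteed-QoS case and the
# dash-to-underscore mapping are assumptions not shown in this log.
def pod_slice(qos_class: str, pod_uid: str) -> str:
    parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{parent}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "1fa16be315968bf2e0dd0dbdf6fc0e5d"))
# -> kubepods-burstable-pod1fa16be315968bf2e0dd0dbdf6fc0e5d.slice
```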
Dec 16 02:08:51.943934 kubelet[2951]: E1216 02:08:51.943849 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:51.956437 systemd[1]: Created slice kubepods-burstable-pod6583efaf04c10e2801f8e3607711d5cd.slice - libcontainer container kubepods-burstable-pod6583efaf04c10e2801f8e3607711d5cd.slice. Dec 16 02:08:51.976689 kubelet[2951]: E1216 02:08:51.976386 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:51.985273 systemd[1]: Created slice kubepods-burstable-pod619bf629a131c61c5c1ba9aacbb8f191.slice - libcontainer container kubepods-burstable-pod619bf629a131c61c5c1ba9aacbb8f191.slice. Dec 16 02:08:51.994087 kubelet[2951]: E1216 02:08:51.993967 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:52.074328 kubelet[2951]: I1216 02:08:52.074190 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:52.074861 kubelet[2951]: I1216 02:08:52.074536 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:52.074861 kubelet[2951]: I1216 02:08:52.074696 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/619bf629a131c61c5c1ba9aacbb8f191-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-223\" (UID: \"619bf629a131c61c5c1ba9aacbb8f191\") " pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:08:52.075602 kubelet[2951]: I1216 02:08:52.075108 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:52.075602 kubelet[2951]: I1216 02:08:52.075373 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6583efaf04c10e2801f8e3607711d5cd-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-223\" (UID: \"6583efaf04c10e2801f8e3607711d5cd\") " pod="kube-system/kube-scheduler-ip-172-31-29-223" Dec 16 02:08:52.075909 kubelet[2951]: I1216 02:08:52.075740 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/619bf629a131c61c5c1ba9aacbb8f191-ca-certs\") pod \"kube-apiserver-ip-172-31-29-223\" (UID: \"619bf629a131c61c5c1ba9aacbb8f191\") " pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 
02:08:52.076157 kubelet[2951]: I1216 02:08:52.075987 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/619bf629a131c61c5c1ba9aacbb8f191-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-223\" (UID: \"619bf629a131c61c5c1ba9aacbb8f191\") " pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:08:52.076157 kubelet[2951]: I1216 02:08:52.076081 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:52.076157 kubelet[2951]: I1216 02:08:52.076123 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:52.110798 kubelet[2951]: I1216 02:08:52.110754 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-223" Dec 16 02:08:52.111293 kubelet[2951]: E1216 02:08:52.111248 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.223:6443/api/v1/nodes\": dial tcp 172.31.29.223:6443: connect: connection refused" node="ip-172-31-29-223" Dec 16 02:08:52.249868 containerd[1959]: time="2025-12-16T02:08:52.249796350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-223,Uid:1fa16be315968bf2e0dd0dbdf6fc0e5d,Namespace:kube-system,Attempt:0,}" Dec 16 02:08:52.274646 kubelet[2951]: E1216 02:08:52.274573 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-223?timeout=10s\": dial tcp 172.31.29.223:6443: connect: connection refused" interval="800ms" Dec 16 02:08:52.280382 containerd[1959]: time="2025-12-16T02:08:52.280278390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-223,Uid:6583efaf04c10e2801f8e3607711d5cd,Namespace:kube-system,Attempt:0,}" Dec 16 02:08:52.299065 containerd[1959]: time="2025-12-16T02:08:52.298987362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-223,Uid:619bf629a131c61c5c1ba9aacbb8f191,Namespace:kube-system,Attempt:0,}" Dec 16 02:08:52.306532 containerd[1959]: time="2025-12-16T02:08:52.306459846Z" level=info msg="connecting to shim 4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83" address="unix:///run/containerd/s/00709b6ed7c947ba0b40ee5d1a4ca3668c34c7aef478121ca003f9de94a4ab95" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:08:52.311456 update_engine[1941]: I20251216 02:08:52.311122 1941 update_attempter.cc:509] Updating boot flags... 
Dec 16 02:08:52.384136 containerd[1959]: time="2025-12-16T02:08:52.383452398Z" level=info msg="connecting to shim fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd" address="unix:///run/containerd/s/53b216779e8bcf3e12b0282b2c74bfeb8504d107402118a1ead1614607c46f89" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:08:52.469436 systemd[1]: Started cri-containerd-4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83.scope - libcontainer container 4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83. Dec 16 02:08:52.472309 containerd[1959]: time="2025-12-16T02:08:52.469645963Z" level=info msg="connecting to shim ffb0140e2b2b0399cd620a251dfe10187c71fd8d0e3f841f19eaa7ac4dfdf314" address="unix:///run/containerd/s/3db9dee00ed292a1643bcfd4b537ecde0470c49b9dd054d80869c5f4a11b5113" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:08:52.517823 kubelet[2951]: I1216 02:08:52.517641 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-223" Dec 16 02:08:52.519970 kubelet[2951]: E1216 02:08:52.519691 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.223:6443/api/v1/nodes\": dial tcp 172.31.29.223:6443: connect: connection refused" node="ip-172-31-29-223" Dec 16 02:08:52.529000 audit: BPF prog-id=86 op=LOAD Dec 16 02:08:52.530000 audit: BPF prog-id=87 op=LOAD Dec 16 02:08:52.530000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.531000 audit: BPF prog-id=87 op=UNLOAD Dec 16 02:08:52.531000 audit[3010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.531000 audit: BPF prog-id=88 op=LOAD Dec 16 02:08:52.531000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.533000 audit: BPF prog-id=89 op=LOAD Dec 16 02:08:52.533000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 
items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.533000 audit: BPF prog-id=89 op=UNLOAD Dec 16 02:08:52.533000 audit[3010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.533000 audit: BPF prog-id=88 op=UNLOAD Dec 16 02:08:52.533000 audit[3010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.533000 audit: BPF prog-id=90 op=LOAD Dec 16 02:08:52.533000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2996 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465376462323563343333353165336437373433376464643230383032 Dec 16 02:08:52.571428 systemd[1]: Started cri-containerd-fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd.scope - libcontainer container fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd. Dec 16 02:08:52.680857 systemd[1]: Started cri-containerd-ffb0140e2b2b0399cd620a251dfe10187c71fd8d0e3f841f19eaa7ac4dfdf314.scope - libcontainer container ffb0140e2b2b0399cd620a251dfe10187c71fd8d0e3f841f19eaa7ac4dfdf314. 
Dec 16 02:08:52.693000 audit: BPF prog-id=91 op=LOAD Dec 16 02:08:52.699000 audit: BPF prog-id=92 op=LOAD Dec 16 02:08:52.699000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.699000 audit: BPF prog-id=92 op=UNLOAD Dec 16 02:08:52.699000 audit[3063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.702000 audit: BPF prog-id=93 op=LOAD Dec 16 02:08:52.702000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.702000 audit: BPF prog-id=94 op=LOAD Dec 16 02:08:52.702000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.703000 audit: BPF prog-id=94 op=UNLOAD Dec 16 02:08:52.703000 audit[3063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.704000 audit: BPF prog-id=93 op=UNLOAD Dec 16 02:08:52.704000 audit[3063]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.704000 audit: BPF prog-id=95 op=LOAD Dec 16 02:08:52.704000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663323834376238373164616136343233636563353862313931326365 Dec 16 02:08:52.713178 containerd[1959]: time="2025-12-16T02:08:52.712640168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-223,Uid:1fa16be315968bf2e0dd0dbdf6fc0e5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83\"" Dec 16 02:08:52.740470 containerd[1959]: time="2025-12-16T02:08:52.740387960Z" level=info msg="CreateContainer within sandbox \"4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 02:08:52.761000 audit: BPF prog-id=96 op=LOAD Dec 16 02:08:52.762000 audit: BPF prog-id=97 op=LOAD Dec 16 02:08:52.762000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.762000 audit: BPF prog-id=97 op=UNLOAD Dec 16 02:08:52.762000 audit[3105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.764000 audit: BPF prog-id=98 op=LOAD Dec 16 02:08:52.764000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.764000 audit: BPF prog-id=99 op=LOAD Dec 16 02:08:52.764000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.764000 audit: BPF prog-id=99 op=UNLOAD Dec 16 02:08:52.764000 audit[3105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.764000 audit: BPF prog-id=98 op=UNLOAD Dec 16 02:08:52.764000 audit[3105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.764000 audit: BPF prog-id=100 op=LOAD Dec 16 02:08:52.764000 audit[3105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3065 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666623031343065326232623033393963643632306132353164666531 Dec 16 02:08:52.799548 containerd[1959]: time="2025-12-16T02:08:52.799160372Z" level=info msg="Container e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:08:52.842862 kubelet[2951]: E1216 02:08:52.842242 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://172.31.29.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-223&limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:08:52.899022 kubelet[2951]: E1216 02:08:52.898836 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.29.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 02:08:52.946136 containerd[1959]: time="2025-12-16T02:08:52.943715181Z" level=info msg="CreateContainer within sandbox \"4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e\"" Dec 16 02:08:52.951347 containerd[1959]: time="2025-12-16T02:08:52.951187209Z" level=info msg="StartContainer for \"e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e\"" Dec 16 02:08:52.966966 containerd[1959]: time="2025-12-16T02:08:52.966913233Z" level=info msg="connecting to shim e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e" address="unix:///run/containerd/s/00709b6ed7c947ba0b40ee5d1a4ca3668c34c7aef478121ca003f9de94a4ab95" protocol=ttrpc version=3 Dec 16 02:08:53.015944 containerd[1959]: time="2025-12-16T02:08:53.015863466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-223,Uid:619bf629a131c61c5c1ba9aacbb8f191,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffb0140e2b2b0399cd620a251dfe10187c71fd8d0e3f841f19eaa7ac4dfdf314\"" Dec 16 02:08:53.018548 containerd[1959]: time="2025-12-16T02:08:53.018393750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-223,Uid:6583efaf04c10e2801f8e3607711d5cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd\"" Dec 16 02:08:53.037872 containerd[1959]: time="2025-12-16T02:08:53.037567722Z" level=info msg="CreateContainer within sandbox \"ffb0140e2b2b0399cd620a251dfe10187c71fd8d0e3f841f19eaa7ac4dfdf314\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 02:08:53.044387 containerd[1959]: time="2025-12-16T02:08:53.044160834Z" level=info msg="CreateContainer within sandbox \"fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 02:08:53.076791 kubelet[2951]: E1216 02:08:53.076686 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-223?timeout=10s\": dial tcp 172.31.29.223:6443: connect: connection refused" interval="1.6s" Dec 16 02:08:53.092275 containerd[1959]: time="2025-12-16T02:08:53.092208750Z" level=info msg="Container 12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:08:53.095399 systemd[1]: Started cri-containerd-e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e.scope - libcontainer container e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e. 
Dec 16 02:08:53.098351 containerd[1959]: time="2025-12-16T02:08:53.098269314Z" level=info msg="Container 5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:08:53.111361 containerd[1959]: time="2025-12-16T02:08:53.111277938Z" level=info msg="CreateContainer within sandbox \"ffb0140e2b2b0399cd620a251dfe10187c71fd8d0e3f841f19eaa7ac4dfdf314\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0\"" Dec 16 02:08:53.112957 containerd[1959]: time="2025-12-16T02:08:53.112838430Z" level=info msg="StartContainer for \"12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0\"" Dec 16 02:08:53.116560 containerd[1959]: time="2025-12-16T02:08:53.116482950Z" level=info msg="connecting to shim 12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0" address="unix:///run/containerd/s/3db9dee00ed292a1643bcfd4b537ecde0470c49b9dd054d80869c5f4a11b5113" protocol=ttrpc version=3 Dec 16 02:08:53.119079 containerd[1959]: time="2025-12-16T02:08:53.118876302Z" level=info msg="CreateContainer within sandbox \"fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45\"" Dec 16 02:08:53.122081 containerd[1959]: time="2025-12-16T02:08:53.122001522Z" level=info msg="StartContainer for \"5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45\"" Dec 16 02:08:53.124404 containerd[1959]: time="2025-12-16T02:08:53.124040742Z" level=info msg="connecting to shim 5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45" address="unix:///run/containerd/s/53b216779e8bcf3e12b0282b2c74bfeb8504d107402118a1ead1614607c46f89" protocol=ttrpc version=3 Dec 16 02:08:53.135475 kubelet[2951]: E1216 02:08:53.135374 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.29.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 02:08:53.145000 audit: BPF prog-id=101 op=LOAD Dec 16 02:08:53.146000 audit: BPF prog-id=102 op=LOAD Dec 16 02:08:53.146000 audit[3222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.147000 audit: BPF prog-id=102 op=UNLOAD Dec 16 02:08:53.147000 audit[3222]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.147000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.148000 audit: BPF prog-id=103 op=LOAD Dec 16 02:08:53.148000 audit[3222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.148000 audit: BPF prog-id=104 op=LOAD Dec 16 02:08:53.148000 audit[3222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.149000 audit: BPF prog-id=104 op=UNLOAD Dec 16 02:08:53.149000 audit[3222]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.150000 audit: BPF prog-id=103 op=UNLOAD Dec 16 02:08:53.150000 audit[3222]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.150000 audit: BPF prog-id=105 op=LOAD Dec 16 02:08:53.150000 audit[3222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2996 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.150000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534316263646430383536383030336635616430653661376162643830 Dec 16 02:08:53.188926 systemd[1]: Started cri-containerd-12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0.scope - libcontainer container 12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0. Dec 16 02:08:53.193149 systemd[1]: Started cri-containerd-5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45.scope - libcontainer container 5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45. Dec 16 02:08:53.249000 audit: BPF prog-id=106 op=LOAD Dec 16 02:08:53.250000 audit: BPF prog-id=107 op=LOAD Dec 16 02:08:53.250000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.250000 audit: BPF prog-id=107 op=UNLOAD Dec 16 02:08:53.250000 audit[3244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.251000 audit: BPF prog-id=108 op=LOAD Dec 16 02:08:53.251000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.252000 audit: BPF prog-id=109 op=LOAD Dec 16 02:08:53.252000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.254000 audit: BPF prog-id=109 op=UNLOAD Dec 16 
02:08:53.254000 audit[3244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.254000 audit: BPF prog-id=108 op=UNLOAD Dec 16 02:08:53.254000 audit[3244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.256000 audit: BPF prog-id=110 op=LOAD Dec 16 02:08:53.256000 audit[3244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3033 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653462613132383539646630656564393032616462326133363437 Dec 16 02:08:53.273041 containerd[1959]: time="2025-12-16T02:08:53.272854099Z" level=info msg="StartContainer for \"e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e\" returns successfully" Dec 16 02:08:53.278000 audit: BPF prog-id=111 op=LOAD Dec 16 02:08:53.279000 audit: BPF prog-id=112 op=LOAD Dec 16 02:08:53.279000 audit[3246]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.279000 audit: BPF prog-id=112 op=UNLOAD Dec 16 02:08:53.279000 audit[3246]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.279000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.279000 audit: BPF prog-id=113 op=LOAD Dec 16 02:08:53.279000 audit[3246]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.280000 audit: BPF prog-id=114 op=LOAD Dec 16 02:08:53.280000 audit[3246]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.280000 audit: BPF prog-id=114 op=UNLOAD Dec 16 02:08:53.280000 audit[3246]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.280000 audit: BPF prog-id=113 op=UNLOAD Dec 16 02:08:53.280000 audit[3246]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.280000 audit: BPF prog-id=115 op=LOAD Dec 16 02:08:53.280000 audit[3246]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3065 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.280000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132663732643963363137333733643435363331663037646565386366 Dec 16 02:08:53.330160 kubelet[2951]: I1216 02:08:53.329973 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-223" Dec 16 02:08:53.331535 kubelet[2951]: E1216 02:08:53.331470 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.223:6443/api/v1/nodes\": dial tcp 172.31.29.223:6443: connect: connection refused" node="ip-172-31-29-223" Dec 16 02:08:53.384245 kubelet[2951]: E1216 02:08:53.384110 2951 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.29.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.223:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 02:08:53.389704 containerd[1959]: time="2025-12-16T02:08:53.389641051Z" level=info msg="StartContainer for \"5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45\" returns successfully" Dec 16 02:08:53.411731 containerd[1959]: time="2025-12-16T02:08:53.411597175Z" level=info msg="StartContainer for \"12f72d9c617373d45631f07dee8cf0a3449ea3de22e57de8ab35b676131824b0\" returns successfully" Dec 16 02:08:53.867355 kubelet[2951]: E1216 02:08:53.866928 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:53.875580 kubelet[2951]: E1216 02:08:53.874702 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:53.890821 kubelet[2951]: E1216 02:08:53.890539 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:54.886906 kubelet[2951]: E1216 02:08:54.885756 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:54.886906 kubelet[2951]: E1216 02:08:54.886463 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:54.889937 kubelet[2951]: E1216 02:08:54.889906 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:54.934734 kubelet[2951]: I1216 02:08:54.934590 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-223" Dec 16 02:08:55.889829 kubelet[2951]: E1216 02:08:55.889547 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:55.891177 kubelet[2951]: E1216 02:08:55.890815 2951 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:58.341032 kubelet[2951]: E1216 02:08:58.340908 2951 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-223\" not found" node="ip-172-31-29-223" Dec 16 02:08:58.487561 kubelet[2951]: I1216 02:08:58.487172 2951 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-223" Dec 16 02:08:58.572530 kubelet[2951]: I1216 02:08:58.572484 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:08:58.589126 kubelet[2951]: E1216 02:08:58.587930 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-223\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:08:58.589740 kubelet[2951]: I1216 02:08:58.589340 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:58.593211 kubelet[2951]: E1216 02:08:58.592637 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-223\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:08:58.594089 kubelet[2951]: I1216 02:08:58.593441 2951 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-223" Dec 16 02:08:58.600228 kubelet[2951]: E1216 02:08:58.599872 2951 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-223\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-223" Dec 16 02:08:58.635102 kubelet[2951]: I1216 02:08:58.634393 2951 apiserver.go:52] "Watching apiserver" Dec 16 02:08:58.672979 kubelet[2951]: I1216 02:08:58.672922 2951 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 02:09:01.168846 systemd[1]: Reload requested from client PID 3327 ('systemctl') (unit session-10.scope)... Dec 16 02:09:01.168872 systemd[1]: Reloading... Dec 16 02:09:01.374171 zram_generator::config[3374]: No configuration found. Dec 16 02:09:01.953383 systemd[1]: Reloading finished in 783 ms. Dec 16 02:09:01.993091 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:02.012026 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 02:09:02.012647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:02.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:02.013976 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 16 02:09:02.014133 kernel: audit: type=1131 audit(1765850942.011:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:02.018552 systemd[1]: kubelet.service: Consumed 2.370s CPU time, 128.7M memory peak. Dec 16 02:09:02.022748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 02:09:02.027000 audit: BPF prog-id=116 op=LOAD Dec 16 02:09:02.027000 audit: BPF prog-id=71 op=UNLOAD Dec 16 02:09:02.033682 kernel: audit: type=1334 audit(1765850942.027:399): prog-id=116 op=LOAD Dec 16 02:09:02.033804 kernel: audit: type=1334 audit(1765850942.027:400): prog-id=71 op=UNLOAD Dec 16 02:09:02.027000 audit: BPF prog-id=117 op=LOAD Dec 16 02:09:02.036640 kernel: audit: type=1334 audit(1765850942.027:401): prog-id=117 op=LOAD Dec 16 02:09:02.042975 kernel: audit: type=1334 audit(1765850942.027:402): prog-id=118 op=LOAD Dec 16 02:09:02.043126 kernel: audit: type=1334 audit(1765850942.027:403): prog-id=72 op=UNLOAD Dec 16 02:09:02.043193 kernel: audit: type=1334 audit(1765850942.027:404): prog-id=73 op=UNLOAD Dec 16 02:09:02.043240 kernel: audit: type=1334 audit(1765850942.036:405): prog-id=119 op=LOAD Dec 16 02:09:02.027000 audit: BPF prog-id=118 op=LOAD Dec 16 02:09:02.027000 audit: BPF prog-id=72 op=UNLOAD Dec 16 02:09:02.027000 audit: BPF prog-id=73 op=UNLOAD Dec 16 02:09:02.036000 audit: BPF prog-id=119 op=LOAD Dec 16 02:09:02.036000 audit: BPF prog-id=69 op=UNLOAD Dec 16 02:09:02.045199 kernel: audit: type=1334 audit(1765850942.036:406): prog-id=69 op=UNLOAD Dec 16 02:09:02.042000 audit: BPF prog-id=120 op=LOAD Dec 16 02:09:02.047153 kernel: audit: type=1334 audit(1765850942.042:407): prog-id=120 op=LOAD Dec 16 02:09:02.042000 audit: BPF prog-id=70 op=UNLOAD Dec 16 02:09:02.044000 audit: BPF prog-id=121 op=LOAD Dec 16 02:09:02.044000 audit: BPF prog-id=78 op=UNLOAD Dec 16 02:09:02.046000 audit: BPF prog-id=122 op=LOAD Dec 16 02:09:02.046000 audit: BPF prog-id=123 op=LOAD Dec 16 02:09:02.046000 audit: BPF prog-id=79 op=UNLOAD Dec 16 02:09:02.046000 audit: BPF prog-id=80 op=UNLOAD Dec 16 02:09:02.057000 audit: BPF prog-id=124 op=LOAD Dec 16 02:09:02.057000 audit: BPF prog-id=75 op=UNLOAD Dec 16 02:09:02.057000 audit: BPF prog-id=125 op=LOAD Dec 16 02:09:02.057000 audit: BPF prog-id=126 op=LOAD Dec 16 02:09:02.058000 audit: BPF prog-id=76 op=UNLOAD Dec 16 02:09:02.058000 audit: BPF prog-id=77 op=UNLOAD Dec 16 02:09:02.059000 audit: BPF prog-id=127 op=LOAD Dec 16 02:09:02.059000 audit: BPF prog-id=66 op=UNLOAD Dec 16 02:09:02.059000 audit: BPF prog-id=128 op=LOAD Dec 16 02:09:02.059000 audit: BPF prog-id=129 op=LOAD Dec 16 02:09:02.059000 audit: BPF prog-id=67 op=UNLOAD Dec 16 02:09:02.059000 audit: BPF prog-id=68 op=UNLOAD Dec 16 02:09:02.061000 audit: BPF prog-id=130 op=LOAD Dec 16 02:09:02.062000 audit: BPF prog-id=83 op=UNLOAD Dec 16 02:09:02.062000 audit: BPF prog-id=131 op=LOAD Dec 16 02:09:02.062000 audit: BPF prog-id=132 op=LOAD Dec 16 02:09:02.062000 audit: BPF prog-id=84 op=UNLOAD Dec 16 02:09:02.062000 audit: BPF prog-id=85 op=UNLOAD Dec 16 02:09:02.066000 audit: BPF prog-id=133 op=LOAD Dec 16 02:09:02.066000 audit: BPF prog-id=74 op=UNLOAD Dec 16 02:09:02.067000 audit: BPF prog-id=134 op=LOAD Dec 16 02:09:02.067000 audit: BPF prog-id=135 op=LOAD Dec 16 02:09:02.067000 audit: BPF prog-id=81 op=UNLOAD Dec 16 02:09:02.068000 audit: BPF prog-id=82 op=UNLOAD Dec 16 02:09:02.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:02.471505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 02:09:02.498130 (kubelet)[3435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:09:02.614624 kubelet[3435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:09:02.614624 kubelet[3435]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:09:02.614624 kubelet[3435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:09:02.616753 kubelet[3435]: I1216 02:09:02.614796 3435 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:09:02.644112 kubelet[3435]: I1216 02:09:02.641808 3435 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 02:09:02.644297 kubelet[3435]: I1216 02:09:02.644183 3435 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:09:02.644791 kubelet[3435]: I1216 02:09:02.644733 3435 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:09:02.650103 kubelet[3435]: I1216 02:09:02.649995 3435 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 02:09:02.660536 kubelet[3435]: I1216 02:09:02.660370 3435 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:09:02.680828 kubelet[3435]: I1216 02:09:02.680532 3435 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:09:02.686584 kubelet[3435]: I1216 02:09:02.686527 3435 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 02:09:02.687305 kubelet[3435]: I1216 02:09:02.687215 3435 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:09:02.687630 kubelet[3435]: I1216 02:09:02.687289 3435 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:09:02.687874 kubelet[3435]: I1216 02:09:02.687638 3435 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:09:02.687874 kubelet[3435]: I1216 02:09:02.687665 3435 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 02:09:02.687874 kubelet[3435]: I1216 02:09:02.687762 3435 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:02.688204 kubelet[3435]: I1216 02:09:02.688165 3435 kubelet.go:480] "Attempting to sync node with API server" Dec 16 02:09:02.688337 kubelet[3435]: I1216 02:09:02.688225 3435 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:09:02.688337 kubelet[3435]: I1216 02:09:02.688280 3435 kubelet.go:386] "Adding apiserver pod source" Dec 16 02:09:02.688337 kubelet[3435]: I1216 02:09:02.688310 3435 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:09:02.693736 kubelet[3435]: I1216 02:09:02.692734 3435 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:09:02.694471 kubelet[3435]: I1216 02:09:02.694402 3435 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:09:02.703635 kubelet[3435]: I1216 02:09:02.703578 3435 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 02:09:02.703867 kubelet[3435]: I1216 02:09:02.703671 3435 server.go:1289] "Started kubelet" Dec 16 02:09:02.713843 kubelet[3435]: I1216 02:09:02.713779 3435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:09:02.728522 kubelet[3435]: I1216 
02:09:02.725523 3435 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:09:02.729079 kubelet[3435]: I1216 02:09:02.728817 3435 server.go:317] "Adding debug handlers to kubelet server" Dec 16 02:09:02.741895 kubelet[3435]: I1216 02:09:02.740111 3435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:09:02.741895 kubelet[3435]: I1216 02:09:02.740597 3435 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:09:02.742195 kubelet[3435]: I1216 02:09:02.742041 3435 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:09:02.749952 kubelet[3435]: I1216 02:09:02.749848 3435 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 02:09:02.751535 kubelet[3435]: E1216 02:09:02.751437 3435 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-223\" not found" Dec 16 02:09:02.755107 kubelet[3435]: I1216 02:09:02.754027 3435 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 02:09:02.755107 kubelet[3435]: I1216 02:09:02.754565 3435 reconciler.go:26] "Reconciler: start to sync state" Dec 16 02:09:02.776733 kubelet[3435]: I1216 02:09:02.771240 3435 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:09:02.776733 kubelet[3435]: E1216 02:09:02.776315 3435 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:09:02.777091 kubelet[3435]: I1216 02:09:02.776978 3435 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:09:02.789966 kubelet[3435]: I1216 02:09:02.789822 3435 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:09:02.818162 kubelet[3435]: I1216 02:09:02.818044 3435 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 02:09:02.822295 kubelet[3435]: I1216 02:09:02.822247 3435 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 02:09:02.822497 kubelet[3435]: I1216 02:09:02.822470 3435 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 02:09:02.822665 kubelet[3435]: I1216 02:09:02.822634 3435 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 02:09:02.822825 kubelet[3435]: I1216 02:09:02.822802 3435 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 02:09:02.823090 kubelet[3435]: E1216 02:09:02.823020 3435 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:09:02.923876 kubelet[3435]: E1216 02:09:02.923828 3435 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 02:09:02.929425 kubelet[3435]: I1216 02:09:02.929372 3435 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:09:02.930099 kubelet[3435]: I1216 02:09:02.929679 3435 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:09:02.930099 kubelet[3435]: I1216 02:09:02.929740 3435 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:02.930099 kubelet[3435]: I1216 02:09:02.929993 3435 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 02:09:02.930433 kubelet[3435]: I1216 02:09:02.930040 3435 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 02:09:02.930572 kubelet[3435]: I1216 02:09:02.930548 3435 policy_none.go:49] "None policy: Start" Dec 16 02:09:02.930690 kubelet[3435]: I1216 02:09:02.930667 3435 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 02:09:02.930832 kubelet[3435]: I1216 02:09:02.930810 3435 state_mem.go:35] "Initializing new in-memory state store" Dec 16 02:09:02.931275 kubelet[3435]: I1216 02:09:02.931232 3435 state_mem.go:75] "Updated machine memory state" Dec 16 02:09:02.944107 kubelet[3435]: E1216 02:09:02.943584 3435 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:09:02.944107 kubelet[3435]: I1216 02:09:02.943917 3435 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:09:02.944107 kubelet[3435]: I1216 02:09:02.943939 3435 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:09:02.944552 kubelet[3435]: I1216 02:09:02.944505 3435 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:09:02.951502 kubelet[3435]: E1216 02:09:02.950809 3435 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 02:09:03.066833 kubelet[3435]: I1216 02:09:03.066630 3435 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-223" Dec 16 02:09:03.084864 kubelet[3435]: I1216 02:09:03.084727 3435 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-223" Dec 16 02:09:03.085108 kubelet[3435]: I1216 02:09:03.085036 3435 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-223" Dec 16 02:09:03.126155 kubelet[3435]: I1216 02:09:03.126096 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:09:03.128959 kubelet[3435]: I1216 02:09:03.128887 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-223" Dec 16 02:09:03.133696 kubelet[3435]: I1216 02:09:03.133597 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:09:03.161091 kubelet[3435]: I1216 02:09:03.160998 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:09:03.161237 kubelet[3435]: I1216 02:09:03.161108 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:09:03.161237 kubelet[3435]: I1216 02:09:03.161159 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:09:03.161237 kubelet[3435]: I1216 02:09:03.161207 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6583efaf04c10e2801f8e3607711d5cd-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-223\" (UID: \"6583efaf04c10e2801f8e3607711d5cd\") " pod="kube-system/kube-scheduler-ip-172-31-29-223" Dec 16 02:09:03.161455 kubelet[3435]: I1216 02:09:03.161247 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/619bf629a131c61c5c1ba9aacbb8f191-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-223\" (UID: \"619bf629a131c61c5c1ba9aacbb8f191\") " pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:09:03.161455 kubelet[3435]: I1216 02:09:03.161284 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/619bf629a131c61c5c1ba9aacbb8f191-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-223\" (UID: \"619bf629a131c61c5c1ba9aacbb8f191\") " pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:09:03.161455 kubelet[3435]: I1216 
02:09:03.161322 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:09:03.161455 kubelet[3435]: I1216 02:09:03.161357 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/619bf629a131c61c5c1ba9aacbb8f191-ca-certs\") pod \"kube-apiserver-ip-172-31-29-223\" (UID: \"619bf629a131c61c5c1ba9aacbb8f191\") " pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:09:03.161455 kubelet[3435]: I1216 02:09:03.161394 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fa16be315968bf2e0dd0dbdf6fc0e5d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-223\" (UID: \"1fa16be315968bf2e0dd0dbdf6fc0e5d\") " pod="kube-system/kube-controller-manager-ip-172-31-29-223" Dec 16 02:09:03.690166 kubelet[3435]: I1216 02:09:03.690071 3435 apiserver.go:52] "Watching apiserver" Dec 16 02:09:03.755033 kubelet[3435]: I1216 02:09:03.754916 3435 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 02:09:03.870100 kubelet[3435]: I1216 02:09:03.869617 3435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:09:03.884914 kubelet[3435]: E1216 02:09:03.884846 3435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-223\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-223" Dec 16 02:09:03.931156 kubelet[3435]: I1216 02:09:03.930345 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-223" podStartSLOduration=0.930322616 podStartE2EDuration="930.322616ms" podCreationTimestamp="2025-12-16 02:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:03.927444836 +0000 UTC m=+1.413646436" watchObservedRunningTime="2025-12-16 02:09:03.930322616 +0000 UTC m=+1.416524240" Dec 16 02:09:03.931750 kubelet[3435]: I1216 02:09:03.931643 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-223" podStartSLOduration=0.931621544 podStartE2EDuration="931.621544ms" podCreationTimestamp="2025-12-16 02:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:03.906835532 +0000 UTC m=+1.393037156" watchObservedRunningTime="2025-12-16 02:09:03.931621544 +0000 UTC m=+1.417823156" Dec 16 02:09:03.979073 kubelet[3435]: I1216 02:09:03.978979 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-223" podStartSLOduration=0.978925556 podStartE2EDuration="978.925556ms" podCreationTimestamp="2025-12-16 02:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:03.95274368 +0000 UTC m=+1.438945304" watchObservedRunningTime="2025-12-16 02:09:03.978925556 +0000 UTC m=+1.465127180" Dec 16 
02:09:06.507134 kubelet[3435]: I1216 02:09:06.506800 3435 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 02:09:06.509440 containerd[1959]: time="2025-12-16T02:09:06.509216781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 02:09:06.511318 kubelet[3435]: I1216 02:09:06.510779 3435 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 02:09:07.227390 systemd[1]: Created slice kubepods-besteffort-pode45389ba_75b8_4ed1_9fab_67f8ca7bb421.slice - libcontainer container kubepods-besteffort-pode45389ba_75b8_4ed1_9fab_67f8ca7bb421.slice. Dec 16 02:09:07.291540 kubelet[3435]: I1216 02:09:07.291427 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e45389ba-75b8-4ed1-9fab-67f8ca7bb421-kube-proxy\") pod \"kube-proxy-jjjsq\" (UID: \"e45389ba-75b8-4ed1-9fab-67f8ca7bb421\") " pod="kube-system/kube-proxy-jjjsq" Dec 16 02:09:07.291540 kubelet[3435]: I1216 02:09:07.291497 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e45389ba-75b8-4ed1-9fab-67f8ca7bb421-xtables-lock\") pod \"kube-proxy-jjjsq\" (UID: \"e45389ba-75b8-4ed1-9fab-67f8ca7bb421\") " pod="kube-system/kube-proxy-jjjsq" Dec 16 02:09:07.291895 kubelet[3435]: I1216 02:09:07.291777 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e45389ba-75b8-4ed1-9fab-67f8ca7bb421-lib-modules\") pod \"kube-proxy-jjjsq\" (UID: \"e45389ba-75b8-4ed1-9fab-67f8ca7bb421\") " pod="kube-system/kube-proxy-jjjsq" Dec 16 02:09:07.291895 kubelet[3435]: I1216 02:09:07.291830 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9drw\" (UniqueName: \"kubernetes.io/projected/e45389ba-75b8-4ed1-9fab-67f8ca7bb421-kube-api-access-x9drw\") pod \"kube-proxy-jjjsq\" (UID: \"e45389ba-75b8-4ed1-9fab-67f8ca7bb421\") " pod="kube-system/kube-proxy-jjjsq" Dec 16 02:09:07.403412 kubelet[3435]: E1216 02:09:07.403356 3435 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 02:09:07.403412 kubelet[3435]: E1216 02:09:07.403408 3435 projected.go:194] Error preparing data for projected volume kube-api-access-x9drw for pod kube-system/kube-proxy-jjjsq: configmap "kube-root-ca.crt" not found Dec 16 02:09:07.403620 kubelet[3435]: E1216 02:09:07.403522 3435 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e45389ba-75b8-4ed1-9fab-67f8ca7bb421-kube-api-access-x9drw podName:e45389ba-75b8-4ed1-9fab-67f8ca7bb421 nodeName:}" failed. No retries permitted until 2025-12-16 02:09:07.903488537 +0000 UTC m=+5.389690161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x9drw" (UniqueName: "kubernetes.io/projected/e45389ba-75b8-4ed1-9fab-67f8ca7bb421-kube-api-access-x9drw") pod "kube-proxy-jjjsq" (UID: "e45389ba-75b8-4ed1-9fab-67f8ca7bb421") : configmap "kube-root-ca.crt" not found Dec 16 02:09:07.721185 systemd[1]: Created slice kubepods-besteffort-podcb11f455_b284_40c2_990e_3e7bc2af26e2.slice - libcontainer container kubepods-besteffort-podcb11f455_b284_40c2_990e_3e7bc2af26e2.slice. 
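The entries just above record the node being handed PodCIDR 192.168.0.0/24 and kubelet pushing it to containerd through CRI. A small sketch with Python's ipaddress module of what that /24 covers per node; how many of those addresses are actually handed out to pods (and whether the first one is reserved for a gateway) is up to the CNI plugin, which has not dropped its config yet at this point in the log:

import ipaddress

# PodCIDR reported in the kuberuntime_manager / kubelet_network entries above.
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

print(pod_cidr.num_addresses)      # 256 addresses in the /24
hosts = list(pod_cidr.hosts())
print(hosts[0], hosts[-1])         # 192.168.0.1 192.168.0.254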
Dec 16 02:09:07.794605 kubelet[3435]: I1216 02:09:07.794551 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb11f455-b284-40c2-990e-3e7bc2af26e2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6p2fj\" (UID: \"cb11f455-b284-40c2-990e-3e7bc2af26e2\") " pod="tigera-operator/tigera-operator-7dcd859c48-6p2fj" Dec 16 02:09:07.795223 kubelet[3435]: I1216 02:09:07.794624 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686s7\" (UniqueName: \"kubernetes.io/projected/cb11f455-b284-40c2-990e-3e7bc2af26e2-kube-api-access-686s7\") pod \"tigera-operator-7dcd859c48-6p2fj\" (UID: \"cb11f455-b284-40c2-990e-3e7bc2af26e2\") " pod="tigera-operator/tigera-operator-7dcd859c48-6p2fj" Dec 16 02:09:08.031137 containerd[1959]: time="2025-12-16T02:09:08.031044296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6p2fj,Uid:cb11f455-b284-40c2-990e-3e7bc2af26e2,Namespace:tigera-operator,Attempt:0,}" Dec 16 02:09:08.084088 containerd[1959]: time="2025-12-16T02:09:08.083259956Z" level=info msg="connecting to shim b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245" address="unix:///run/containerd/s/489a8202ae0919288f30ab3e5b8c1ea308dacaf680847191dd9871ad8be8f75a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:08.140181 containerd[1959]: time="2025-12-16T02:09:08.140136717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjjsq,Uid:e45389ba-75b8-4ed1-9fab-67f8ca7bb421,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:08.144463 systemd[1]: Started cri-containerd-b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245.scope - libcontainer container b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245. 
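The audit records that follow (and recur for the rest of this capture) carry the triggering command line in the PROCTITLE field as hex, with NUL bytes separating the arguments. A minimal sketch for decoding those values; the helper name is illustrative, not part of auditd:

import binascii

def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE hex value into its NUL-separated argv."""
    raw = binascii.unhexlify(hexstr)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

# First few bytes of one of the runc PROCTITLE values below (truncated for brevity):
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
# ['runc', '--root', '/run/containerd/runc/k8s.io']

The same decoding applies to the iptables and ip6tables PROCTITLE values in the NETFILTER_CFG records further down.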
Dec 16 02:09:08.192000 audit: BPF prog-id=136 op=LOAD Dec 16 02:09:08.194969 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 02:09:08.195042 kernel: audit: type=1334 audit(1765850948.192:440): prog-id=136 op=LOAD Dec 16 02:09:08.196000 audit: BPF prog-id=137 op=LOAD Dec 16 02:09:08.199056 kernel: audit: type=1334 audit(1765850948.196:441): prog-id=137 op=LOAD Dec 16 02:09:08.196000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.206509 kernel: audit: type=1300 audit(1765850948.196:441): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.217026 kernel: audit: type=1327 audit(1765850948.196:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.218726 containerd[1959]: time="2025-12-16T02:09:08.218651085Z" level=info msg="connecting to shim 78c71abca1dbdc2dbeecce12e1cbda8e86af992fe7c78d2885ad788f5ec63b61" address="unix:///run/containerd/s/70ca794c9d70e545b2c2c016dc61657db2f4ecec849c6c1e478aaf688a6ad31b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:08.198000 audit: BPF prog-id=137 op=UNLOAD Dec 16 02:09:08.198000 audit[3501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.228760 kernel: audit: type=1334 audit(1765850948.198:442): prog-id=137 op=UNLOAD Dec 16 02:09:08.228994 kernel: audit: type=1300 audit(1765850948.198:442): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.230262 kernel: audit: type=1327 audit(1765850948.198:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.207000 audit: BPF prog-id=138 op=LOAD Dec 16 02:09:08.237088 
kernel: audit: type=1334 audit(1765850948.207:443): prog-id=138 op=LOAD Dec 16 02:09:08.207000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.243462 kernel: audit: type=1300 audit(1765850948.207:443): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.251117 kernel: audit: type=1327 audit(1765850948.207:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.208000 audit: BPF prog-id=139 op=LOAD Dec 16 02:09:08.208000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.209000 audit: BPF prog-id=139 op=UNLOAD Dec 16 02:09:08.209000 audit[3501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.209000 audit: BPF prog-id=138 op=UNLOAD Dec 16 02:09:08.209000 audit[3501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.209000 audit: BPF prog-id=140 op=LOAD Dec 16 02:09:08.209000 audit[3501]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3490 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236333239316237356430316365616437663963336539376135623634 Dec 16 02:09:08.279496 systemd[1]: Started cri-containerd-78c71abca1dbdc2dbeecce12e1cbda8e86af992fe7c78d2885ad788f5ec63b61.scope - libcontainer container 78c71abca1dbdc2dbeecce12e1cbda8e86af992fe7c78d2885ad788f5ec63b61. Dec 16 02:09:08.310796 containerd[1959]: time="2025-12-16T02:09:08.310584909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6p2fj,Uid:cb11f455-b284-40c2-990e-3e7bc2af26e2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245\"" Dec 16 02:09:08.320111 containerd[1959]: time="2025-12-16T02:09:08.319986862Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 02:09:08.325000 audit: BPF prog-id=141 op=LOAD Dec 16 02:09:08.327000 audit: BPF prog-id=142 op=LOAD Dec 16 02:09:08.327000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.327000 audit: BPF prog-id=142 op=UNLOAD Dec 16 02:09:08.327000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.327000 audit: BPF prog-id=143 op=LOAD Dec 16 02:09:08.327000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.327000 audit: BPF prog-id=144 op=LOAD Dec 16 02:09:08.327000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 
ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.327000 audit: BPF prog-id=144 op=UNLOAD Dec 16 02:09:08.327000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.328000 audit: BPF prog-id=143 op=UNLOAD Dec 16 02:09:08.328000 audit[3542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.328000 audit: BPF prog-id=145 op=LOAD Dec 16 02:09:08.328000 audit[3542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3529 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633731616263613164626463326462656563636531326531636264 Dec 16 02:09:08.360749 containerd[1959]: time="2025-12-16T02:09:08.360616990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjjsq,Uid:e45389ba-75b8-4ed1-9fab-67f8ca7bb421,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c71abca1dbdc2dbeecce12e1cbda8e86af992fe7c78d2885ad788f5ec63b61\"" Dec 16 02:09:08.372094 containerd[1959]: time="2025-12-16T02:09:08.371992870Z" level=info msg="CreateContainer within sandbox \"78c71abca1dbdc2dbeecce12e1cbda8e86af992fe7c78d2885ad788f5ec63b61\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 02:09:08.393128 containerd[1959]: time="2025-12-16T02:09:08.392246014Z" level=info msg="Container 546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:08.409494 containerd[1959]: time="2025-12-16T02:09:08.409386526Z" level=info msg="CreateContainer within sandbox \"78c71abca1dbdc2dbeecce12e1cbda8e86af992fe7c78d2885ad788f5ec63b61\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1\"" Dec 16 02:09:08.411016 containerd[1959]: time="2025-12-16T02:09:08.410938162Z" level=info msg="StartContainer for \"546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1\"" Dec 16 02:09:08.416740 containerd[1959]: time="2025-12-16T02:09:08.416666266Z" level=info msg="connecting to shim 546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1" address="unix:///run/containerd/s/70ca794c9d70e545b2c2c016dc61657db2f4ecec849c6c1e478aaf688a6ad31b" protocol=ttrpc version=3 Dec 16 02:09:08.453445 systemd[1]: Started cri-containerd-546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1.scope - libcontainer container 546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1. Dec 16 02:09:08.541000 audit: BPF prog-id=146 op=LOAD Dec 16 02:09:08.541000 audit[3578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3529 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534363739336162323030373332666133393931623532666561313265 Dec 16 02:09:08.541000 audit: BPF prog-id=147 op=LOAD Dec 16 02:09:08.541000 audit[3578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3529 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534363739336162323030373332666133393931623532666561313265 Dec 16 02:09:08.542000 audit: BPF prog-id=147 op=UNLOAD Dec 16 02:09:08.542000 audit[3578]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3529 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534363739336162323030373332666133393931623532666561313265 Dec 16 02:09:08.542000 audit: BPF prog-id=146 op=UNLOAD Dec 16 02:09:08.542000 audit[3578]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3529 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.542000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534363739336162323030373332666133393931623532666561313265 Dec 16 02:09:08.542000 audit: BPF prog-id=148 op=LOAD Dec 16 02:09:08.542000 audit[3578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3529 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534363739336162323030373332666133393931623532666561313265 Dec 16 02:09:08.600955 containerd[1959]: time="2025-12-16T02:09:08.598351595Z" level=info msg="StartContainer for \"546793ab200732fa3991b52fea12e00b331ca437bec29da63ef745a0002795b1\" returns successfully" Dec 16 02:09:08.927000 audit[3641]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:08.927000 audit[3641]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde056c50 a2=0 a3=1 items=0 ppid=3591 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:09:08.929000 audit[3642]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:08.929000 audit[3642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8c7d800 a2=0 a3=1 items=0 ppid=3591 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:09:08.932000 audit[3643]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:08.932000 audit[3643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca087530 a2=0 a3=1 items=0 ppid=3591 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:09:08.937000 audit[3644]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:08.937000 audit[3644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff15a0f60 a2=0 a3=1 items=0 ppid=3591 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:09:08.949000 audit[3645]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:08.949000 audit[3645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe557d060 a2=0 a3=1 items=0 ppid=3591 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:09:08.953028 kubelet[3435]: I1216 02:09:08.952914 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jjjsq" podStartSLOduration=1.952889029 podStartE2EDuration="1.952889029s" podCreationTimestamp="2025-12-16 02:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:08.951628801 +0000 UTC m=+6.437830413" watchObservedRunningTime="2025-12-16 02:09:08.952889029 +0000 UTC m=+6.439090629" Dec 16 02:09:08.990000 audit[3649]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:08.990000 audit[3649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6164c30 a2=0 a3=1 items=0 ppid=3591 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:08.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:09:09.072000 audit[3651]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.072000 audit[3651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd7d9cfa0 a2=0 a3=1 items=0 ppid=3591 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:09:09.080000 audit[3653]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.080000 audit[3653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffee4856c0 a2=0 a3=1 items=0 ppid=3591 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.080000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 02:09:09.094000 audit[3656]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.094000 audit[3656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcb52dce0 a2=0 a3=1 items=0 ppid=3591 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 02:09:09.098000 audit[3657]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.098000 audit[3657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc416f390 a2=0 a3=1 items=0 ppid=3591 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:09:09.105000 audit[3659]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.105000 audit[3659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7c89690 a2=0 a3=1 items=0 ppid=3591 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:09:09.108000 audit[3660]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.108000 audit[3660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc124b90 a2=0 a3=1 items=0 ppid=3591 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:09:09.119000 audit[3662]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.119000 audit[3662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff07f2980 a2=0 a3=1 items=0 ppid=3591 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 02:09:09.129000 audit[3665]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.129000 audit[3665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd3ee06a0 a2=0 a3=1 items=0 ppid=3591 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 02:09:09.133000 audit[3666]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.133000 audit[3666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4718140 a2=0 a3=1 items=0 ppid=3591 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:09:09.140000 audit[3668]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.140000 audit[3668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdca2b550 a2=0 a3=1 items=0 ppid=3591 pid=3668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:09:09.143000 audit[3669]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.143000 audit[3669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8655180 a2=0 a3=1 items=0 ppid=3591 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:09:09.151000 audit[3671]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3671 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 02:09:09.151000 audit[3671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff574c160 a2=0 a3=1 items=0 ppid=3591 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:09:09.161000 audit[3674]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.161000 audit[3674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe9d6bdf0 a2=0 a3=1 items=0 ppid=3591 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:09:09.172000 audit[3677]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.172000 audit[3677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff838a150 a2=0 a3=1 items=0 ppid=3591 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 02:09:09.174000 audit[3678]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.174000 audit[3678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2e4a1e0 a2=0 a3=1 items=0 ppid=3591 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:09:09.185000 audit[3680]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.185000 audit[3680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff39a3e20 a2=0 a3=1 items=0 ppid=3591 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.185000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:09.194000 audit[3683]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.194000 audit[3683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc26660d0 a2=0 a3=1 items=0 ppid=3591 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:09.199000 audit[3684]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.199000 audit[3684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8647740 a2=0 a3=1 items=0 ppid=3591 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:09:09.210000 audit[3686]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:09.210000 audit[3686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe24a6220 a2=0 a3=1 items=0 ppid=3591 pid=3686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:09:09.259000 audit[3692]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:09.259000 audit[3692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee3398a0 a2=0 a3=1 items=0 ppid=3591 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:09.270000 audit[3692]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3692 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:09.270000 audit[3692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffee3398a0 a2=0 a3=1 items=0 ppid=3591 pid=3692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:09.273000 audit[3697]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.273000 audit[3697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd1d14950 a2=0 a3=1 items=0 ppid=3591 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:09:09.280000 audit[3699]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.280000 audit[3699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdb20ff90 a2=0 a3=1 items=0 ppid=3591 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 02:09:09.290000 audit[3702]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.290000 audit[3702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffa3478e0 a2=0 a3=1 items=0 ppid=3591 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 02:09:09.293000 audit[3703]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.293000 audit[3703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca4b37b0 a2=0 a3=1 items=0 ppid=3591 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:09:09.299000 audit[3705]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3705 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.299000 audit[3705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe04d6230 a2=0 a3=1 items=0 
ppid=3591 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:09:09.302000 audit[3706]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.302000 audit[3706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd1bd860 a2=0 a3=1 items=0 ppid=3591 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:09:09.309000 audit[3708]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3708 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.309000 audit[3708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0325af0 a2=0 a3=1 items=0 ppid=3591 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 02:09:09.320000 audit[3711]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.320000 audit[3711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe7e95b60 a2=0 a3=1 items=0 ppid=3591 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 02:09:09.323000 audit[3712]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.323000 audit[3712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff73efab0 a2=0 a3=1 items=0 ppid=3591 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:09:09.329000 audit[3714]: NETFILTER_CFG table=filter:90 family=10 
entries=1 op=nft_register_rule pid=3714 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.329000 audit[3714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe845a410 a2=0 a3=1 items=0 ppid=3591 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:09:09.332000 audit[3715]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.332000 audit[3715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf894f10 a2=0 a3=1 items=0 ppid=3591 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:09:09.339000 audit[3717]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.339000 audit[3717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcde96180 a2=0 a3=1 items=0 ppid=3591 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:09:09.349000 audit[3720]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.349000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdedc5d30 a2=0 a3=1 items=0 ppid=3591 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 02:09:09.359000 audit[3723]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.359000 audit[3723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf4f5830 a2=0 a3=1 items=0 ppid=3591 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:09:09.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 02:09:09.362000 audit[3724]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.362000 audit[3724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe78b84e0 a2=0 a3=1 items=0 ppid=3591 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:09:09.372000 audit[3726]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.372000 audit[3726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe3cfa7a0 a2=0 a3=1 items=0 ppid=3591 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:09.383000 audit[3729]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3729 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.383000 audit[3729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff46003c0 a2=0 a3=1 items=0 ppid=3591 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:09.387000 audit[3730]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.387000 audit[3730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0f7c310 a2=0 a3=1 items=0 ppid=3591 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:09:09.395000 audit[3732]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.395000 audit[3732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff79fde50 a2=0 a3=1 items=0 ppid=3591 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:09:09.399000 audit[3733]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.399000 audit[3733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffe06570 a2=0 a3=1 items=0 ppid=3591 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:09:09.405000 audit[3735]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.405000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff16c3870 a2=0 a3=1 items=0 ppid=3591 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:09:09.413000 audit[3738]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:09.413000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc07f2ed0 a2=0 a3=1 items=0 ppid=3591 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:09:09.421000 audit[3740]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:09:09.421000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc3d85800 a2=0 a3=1 items=0 ppid=3591 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.421000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:09.422000 audit[3740]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:09:09.422000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc3d85800 a2=0 a3=1 items=0 ppid=3591 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.422000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:09.807858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491874180.mount: Deactivated successfully. Dec 16 02:09:11.036194 containerd[1959]: time="2025-12-16T02:09:11.036113951Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:11.039117 containerd[1959]: time="2025-12-16T02:09:11.039012779Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 02:09:11.042216 containerd[1959]: time="2025-12-16T02:09:11.042148427Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:11.048100 containerd[1959]: time="2025-12-16T02:09:11.047415383Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:11.049254 containerd[1959]: time="2025-12-16T02:09:11.049181471Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.728876357s" Dec 16 02:09:11.049254 containerd[1959]: time="2025-12-16T02:09:11.049242479Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 02:09:11.061599 containerd[1959]: time="2025-12-16T02:09:11.061526339Z" level=info msg="CreateContainer within sandbox \"b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 02:09:11.078121 containerd[1959]: time="2025-12-16T02:09:11.078068771Z" level=info msg="Container 0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:11.087208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198202121.mount: Deactivated successfully. 
Dec 16 02:09:11.095963 containerd[1959]: time="2025-12-16T02:09:11.095882639Z" level=info msg="CreateContainer within sandbox \"b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\"" Dec 16 02:09:11.099126 containerd[1959]: time="2025-12-16T02:09:11.098728619Z" level=info msg="StartContainer for \"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\"" Dec 16 02:09:11.101731 containerd[1959]: time="2025-12-16T02:09:11.101680235Z" level=info msg="connecting to shim 0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40" address="unix:///run/containerd/s/489a8202ae0919288f30ab3e5b8c1ea308dacaf680847191dd9871ad8be8f75a" protocol=ttrpc version=3 Dec 16 02:09:11.141415 systemd[1]: Started cri-containerd-0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40.scope - libcontainer container 0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40. Dec 16 02:09:11.165000 audit: BPF prog-id=149 op=LOAD Dec 16 02:09:11.166000 audit: BPF prog-id=150 op=LOAD Dec 16 02:09:11.166000 audit[3749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.166000 audit: BPF prog-id=150 op=UNLOAD Dec 16 02:09:11.166000 audit[3749]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.167000 audit: BPF prog-id=151 op=LOAD Dec 16 02:09:11.167000 audit[3749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.167000 audit: BPF prog-id=152 op=LOAD Dec 16 02:09:11.167000 audit[3749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.167000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.168000 audit: BPF prog-id=152 op=UNLOAD Dec 16 02:09:11.168000 audit[3749]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.168000 audit: BPF prog-id=151 op=UNLOAD Dec 16 02:09:11.168000 audit[3749]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.168000 audit: BPF prog-id=153 op=LOAD Dec 16 02:09:11.168000 audit[3749]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3490 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:11.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066626639373738393463623737616136656439303630333433393262 Dec 16 02:09:11.204464 containerd[1959]: time="2025-12-16T02:09:11.204400104Z" level=info msg="StartContainer for \"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\" returns successfully" Dec 16 02:09:11.932513 kubelet[3435]: I1216 02:09:11.932402 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6p2fj" podStartSLOduration=2.1975785820000002 podStartE2EDuration="4.932350707s" podCreationTimestamp="2025-12-16 02:09:07 +0000 UTC" firstStartedPulling="2025-12-16 02:09:08.317030518 +0000 UTC m=+5.803232130" lastFinishedPulling="2025-12-16 02:09:11.051802655 +0000 UTC m=+8.538004255" observedRunningTime="2025-12-16 02:09:11.931702227 +0000 UTC m=+9.417903851" watchObservedRunningTime="2025-12-16 02:09:11.932350707 +0000 UTC m=+9.418552331" Dec 16 02:09:18.539000 audit[2349]: USER_END pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:18.540137 sudo[2349]: pam_unix(sudo:session): session closed for user root Dec 16 02:09:18.542353 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 02:09:18.542478 kernel: audit: type=1106 audit(1765850958.539:520): pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:09:18.552663 kernel: audit: type=1104 audit(1765850958.539:521): pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:09:18.539000 audit[2349]: CRED_DISP pid=2349 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:09:18.569163 sshd[2348]: Connection closed by 139.178.89.65 port 59860 Dec 16 02:09:18.571490 sshd-session[2344]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:18.574000 audit[2344]: USER_END pid=2344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:18.586454 systemd[1]: sshd@8-172.31.29.223:22-139.178.89.65:59860.service: Deactivated successfully. Dec 16 02:09:18.596765 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 02:09:18.602590 kernel: audit: type=1106 audit(1765850958.574:522): pid=2344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:18.602724 kernel: audit: type=1104 audit(1765850958.574:523): pid=2344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:18.574000 audit[2344]: CRED_DISP pid=2344 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:18.599762 systemd[1]: session-10.scope: Consumed 11.623s CPU time, 224.3M memory peak. Dec 16 02:09:18.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.223:22-139.178.89.65:59860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:18.603495 systemd-logind[1940]: Session 10 logged out. Waiting for processes to exit. Dec 16 02:09:18.611942 systemd-logind[1940]: Removed session 10. Dec 16 02:09:18.614172 kernel: audit: type=1131 audit(1765850958.585:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.223:22-139.178.89.65:59860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:22.054000 audit[3832]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:22.054000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff28904f0 a2=0 a3=1 items=0 ppid=3591 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:22.067032 kernel: audit: type=1325 audit(1765850962.054:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:22.067193 kernel: audit: type=1300 audit(1765850962.054:525): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff28904f0 a2=0 a3=1 items=0 ppid=3591 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:22.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:22.073187 kernel: audit: type=1327 audit(1765850962.054:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:22.067000 audit[3832]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:22.077964 kernel: audit: type=1325 audit(1765850962.067:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:22.067000 audit[3832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff28904f0 a2=0 a3=1 items=0 ppid=3591 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:22.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:22.088125 kernel: audit: type=1300 audit(1765850962.067:526): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff28904f0 a2=0 a3=1 items=0 ppid=3591 pid=3832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:22.171000 audit[3834]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:22.171000 audit[3834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe825e4b0 a2=0 a3=1 items=0 ppid=3591 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:22.171000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:22.177000 audit[3834]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 02:09:22.177000 audit[3834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe825e4b0 a2=0 a3=1 items=0 ppid=3591 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:22.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.528000 audit[3839]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.534947 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:09:29.535107 kernel: audit: type=1325 audit(1765850969.528:529): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.528000 audit[3839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcf63bc80 a2=0 a3=1 items=0 ppid=3591 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.543458 kernel: audit: type=1300 audit(1765850969.528:529): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcf63bc80 a2=0 a3=1 items=0 ppid=3591 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.550130 kernel: audit: type=1327 audit(1765850969.528:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.551000 audit[3839]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.551000 audit[3839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf63bc80 a2=0 a3=1 items=0 ppid=3591 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.563069 kernel: audit: type=1325 audit(1765850969.551:530): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.563195 kernel: audit: type=1300 audit(1765850969.551:530): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf63bc80 a2=0 a3=1 items=0 ppid=3591 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.570092 kernel: audit: type=1327 audit(1765850969.551:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.606000 audit[3841]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.606000 audit[3841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc0d79eb0 a2=0 a3=1 items=0 ppid=3591 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.614239 kernel: audit: type=1325 audit(1765850969.606:531): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.628095 kernel: audit: type=1300 audit(1765850969.606:531): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc0d79eb0 a2=0 a3=1 items=0 ppid=3591 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.628198 kernel: audit: type=1327 audit(1765850969.606:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:29.631000 audit[3841]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.637170 kernel: audit: type=1325 audit(1765850969.631:532): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:29.631000 audit[3841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc0d79eb0 a2=0 a3=1 items=0 ppid=3591 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:29.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:30.661000 audit[3843]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:30.661000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe65c7740 a2=0 a3=1 items=0 ppid=3591 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:30.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:30.666000 audit[3843]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:30.666000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe65c7740 a2=0 a3=1 items=0 ppid=3591 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:30.666000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.819000 audit[3845]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.822247 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 16 02:09:36.819000 audit[3845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe991e3f0 a2=0 a3=1 items=0 ppid=3591 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.839127 kernel: audit: type=1325 audit(1765850976.819:535): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.839352 kernel: audit: type=1300 audit(1765850976.819:535): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe991e3f0 a2=0 a3=1 items=0 ppid=3591 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.844449 kernel: audit: type=1327 audit(1765850976.819:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.846000 audit[3845]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.853099 kernel: audit: type=1325 audit(1765850976.846:536): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.846000 audit[3845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe991e3f0 a2=0 a3=1 items=0 ppid=3591 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.878816 kernel: audit: type=1300 audit(1765850976.846:536): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe991e3f0 a2=0 a3=1 items=0 ppid=3591 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.879109 kernel: audit: type=1327 audit(1765850976.846:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.928941 systemd[1]: Created slice kubepods-besteffort-pod9c809760_b0d1_466a_973a_f2d03dae4ae7.slice - libcontainer container kubepods-besteffort-pod9c809760_b0d1_466a_973a_f2d03dae4ae7.slice. 
Dec 16 02:09:36.936000 audit[3847]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.936000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd4c2acb0 a2=0 a3=1 items=0 ppid=3591 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.949439 kernel: audit: type=1325 audit(1765850976.936:537): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.949584 kernel: audit: type=1300 audit(1765850976.936:537): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd4c2acb0 a2=0 a3=1 items=0 ppid=3591 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.954254 kernel: audit: type=1327 audit(1765850976.936:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:36.955000 audit[3847]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.955000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd4c2acb0 a2=0 a3=1 items=0 ppid=3591 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:36.960276 kernel: audit: type=1325 audit(1765850976.955:538): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:36.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:37.006360 kubelet[3435]: I1216 02:09:37.006279 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c809760-b0d1-466a-973a-f2d03dae4ae7-tigera-ca-bundle\") pod \"calico-typha-655ddcc49f-p859j\" (UID: \"9c809760-b0d1-466a-973a-f2d03dae4ae7\") " pod="calico-system/calico-typha-655ddcc49f-p859j" Dec 16 02:09:37.007647 kubelet[3435]: I1216 02:09:37.006369 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c809760-b0d1-466a-973a-f2d03dae4ae7-typha-certs\") pod \"calico-typha-655ddcc49f-p859j\" (UID: \"9c809760-b0d1-466a-973a-f2d03dae4ae7\") " pod="calico-system/calico-typha-655ddcc49f-p859j" Dec 16 02:09:37.007647 kubelet[3435]: I1216 02:09:37.006414 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqb4k\" (UniqueName: \"kubernetes.io/projected/9c809760-b0d1-466a-973a-f2d03dae4ae7-kube-api-access-kqb4k\") pod \"calico-typha-655ddcc49f-p859j\" (UID: \"9c809760-b0d1-466a-973a-f2d03dae4ae7\") " 
pod="calico-system/calico-typha-655ddcc49f-p859j" Dec 16 02:09:37.239190 containerd[1959]: time="2025-12-16T02:09:37.238762837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655ddcc49f-p859j,Uid:9c809760-b0d1-466a-973a-f2d03dae4ae7,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:37.291589 systemd[1]: Created slice kubepods-besteffort-poda5d5724e_9ef6_43d2_86ce_d3837b27c82c.slice - libcontainer container kubepods-besteffort-poda5d5724e_9ef6_43d2_86ce_d3837b27c82c.slice. Dec 16 02:09:37.311072 kubelet[3435]: I1216 02:09:37.310676 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-flexvol-driver-host\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.311072 kubelet[3435]: I1216 02:09:37.310745 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-lib-modules\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.311072 kubelet[3435]: I1216 02:09:37.310798 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-tigera-ca-bundle\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.311072 kubelet[3435]: I1216 02:09:37.310870 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcg4\" (UniqueName: \"kubernetes.io/projected/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-kube-api-access-wrcg4\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.311072 kubelet[3435]: I1216 02:09:37.310945 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-cni-log-dir\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.311446 kubelet[3435]: I1216 02:09:37.311006 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-var-run-calico\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.313626 kubelet[3435]: I1216 02:09:37.311643 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-node-certs\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.313626 kubelet[3435]: I1216 02:09:37.311795 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-policysync\") pod \"calico-node-5tkx4\" (UID: 
\"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.313626 kubelet[3435]: I1216 02:09:37.311848 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-var-lib-calico\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.313626 kubelet[3435]: I1216 02:09:37.311935 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-cni-bin-dir\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.313626 kubelet[3435]: I1216 02:09:37.312109 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-cni-net-dir\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.313983 kubelet[3435]: I1216 02:09:37.312297 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5d5724e-9ef6-43d2-86ce-d3837b27c82c-xtables-lock\") pod \"calico-node-5tkx4\" (UID: \"a5d5724e-9ef6-43d2-86ce-d3837b27c82c\") " pod="calico-system/calico-node-5tkx4" Dec 16 02:09:37.319082 containerd[1959]: time="2025-12-16T02:09:37.318448646Z" level=info msg="connecting to shim 0c07bb0d942f1e6e65c76a085359c29dba0acccc225f2d2ae0c62fcc43a879ea" address="unix:///run/containerd/s/5ad0c22bdcd9bcec666d85b2510641853cec5c7e59c4bf5029e3b8bdd2f50d49" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:37.393471 systemd[1]: Started cri-containerd-0c07bb0d942f1e6e65c76a085359c29dba0acccc225f2d2ae0c62fcc43a879ea.scope - libcontainer container 0c07bb0d942f1e6e65c76a085359c29dba0acccc225f2d2ae0c62fcc43a879ea. Dec 16 02:09:37.415261 kubelet[3435]: E1216 02:09:37.415170 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.415261 kubelet[3435]: W1216 02:09:37.415249 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.415751 kubelet[3435]: E1216 02:09:37.415324 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.416511 kubelet[3435]: E1216 02:09:37.416463 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.416649 kubelet[3435]: W1216 02:09:37.416499 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.416649 kubelet[3435]: E1216 02:09:37.416556 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.418357 kubelet[3435]: E1216 02:09:37.418302 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.418357 kubelet[3435]: W1216 02:09:37.418367 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.418357 kubelet[3435]: E1216 02:09:37.418406 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.419098 kubelet[3435]: E1216 02:09:37.419034 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.419098 kubelet[3435]: W1216 02:09:37.419094 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.419233 kubelet[3435]: E1216 02:09:37.419124 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.419730 kubelet[3435]: E1216 02:09:37.419651 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.419730 kubelet[3435]: W1216 02:09:37.419712 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.419895 kubelet[3435]: E1216 02:09:37.419742 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.421956 kubelet[3435]: E1216 02:09:37.421879 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.421956 kubelet[3435]: W1216 02:09:37.421943 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.422198 kubelet[3435]: E1216 02:09:37.422000 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.422868 kubelet[3435]: E1216 02:09:37.422598 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.422868 kubelet[3435]: W1216 02:09:37.422662 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.422868 kubelet[3435]: E1216 02:09:37.422694 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.424273 kubelet[3435]: E1216 02:09:37.423345 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.424273 kubelet[3435]: W1216 02:09:37.423371 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.424273 kubelet[3435]: E1216 02:09:37.423401 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.424716 kubelet[3435]: E1216 02:09:37.424667 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.424815 kubelet[3435]: W1216 02:09:37.424704 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.424815 kubelet[3435]: E1216 02:09:37.424756 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.426687 kubelet[3435]: E1216 02:09:37.426629 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.426687 kubelet[3435]: W1216 02:09:37.426673 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.426917 kubelet[3435]: E1216 02:09:37.426707 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.427388 kubelet[3435]: E1216 02:09:37.427343 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.427388 kubelet[3435]: W1216 02:09:37.427378 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.427556 kubelet[3435]: E1216 02:09:37.427408 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.430107 kubelet[3435]: E1216 02:09:37.429614 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.430107 kubelet[3435]: W1216 02:09:37.429678 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.430107 kubelet[3435]: E1216 02:09:37.429712 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.430107 kubelet[3435]: E1216 02:09:37.430033 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.430107 kubelet[3435]: W1216 02:09:37.430086 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.430107 kubelet[3435]: E1216 02:09:37.430113 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.430498 kubelet[3435]: E1216 02:09:37.430423 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.430498 kubelet[3435]: W1216 02:09:37.430442 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.430498 kubelet[3435]: E1216 02:09:37.430468 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.431698 kubelet[3435]: E1216 02:09:37.430714 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.431698 kubelet[3435]: W1216 02:09:37.430743 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.431698 kubelet[3435]: E1216 02:09:37.430765 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.433301 kubelet[3435]: E1216 02:09:37.433236 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.433301 kubelet[3435]: W1216 02:09:37.433282 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.433517 kubelet[3435]: E1216 02:09:37.433319 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.436887 kubelet[3435]: E1216 02:09:37.436322 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.436887 kubelet[3435]: W1216 02:09:37.436401 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.436887 kubelet[3435]: E1216 02:09:37.436436 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.437542 kubelet[3435]: E1216 02:09:37.436974 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.437542 kubelet[3435]: W1216 02:09:37.436996 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.437542 kubelet[3435]: E1216 02:09:37.437087 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.437999 kubelet[3435]: E1216 02:09:37.437956 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.437999 kubelet[3435]: W1216 02:09:37.437991 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.438201 kubelet[3435]: E1216 02:09:37.438021 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.439357 kubelet[3435]: E1216 02:09:37.439307 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.439357 kubelet[3435]: W1216 02:09:37.439344 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.440244 kubelet[3435]: E1216 02:09:37.439377 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.440538 kubelet[3435]: E1216 02:09:37.440496 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.440538 kubelet[3435]: W1216 02:09:37.440531 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.440658 kubelet[3435]: E1216 02:09:37.440564 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.441566 kubelet[3435]: E1216 02:09:37.441520 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.441566 kubelet[3435]: W1216 02:09:37.441557 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.441785 kubelet[3435]: E1216 02:09:37.441589 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.443302 kubelet[3435]: E1216 02:09:37.443246 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.443302 kubelet[3435]: W1216 02:09:37.443288 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.443737 kubelet[3435]: E1216 02:09:37.443321 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.443805 kubelet[3435]: E1216 02:09:37.443746 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.443805 kubelet[3435]: W1216 02:09:37.443768 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.443805 kubelet[3435]: E1216 02:09:37.443794 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.444380 kubelet[3435]: E1216 02:09:37.444154 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.444380 kubelet[3435]: W1216 02:09:37.444173 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.444380 kubelet[3435]: E1216 02:09:37.444199 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.445075 kubelet[3435]: E1216 02:09:37.444792 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.445445 kubelet[3435]: W1216 02:09:37.445361 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.446187 kubelet[3435]: E1216 02:09:37.445450 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.446465 kubelet[3435]: E1216 02:09:37.446416 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.446567 kubelet[3435]: W1216 02:09:37.446456 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.446567 kubelet[3435]: E1216 02:09:37.446512 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.449168 kubelet[3435]: E1216 02:09:37.449113 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.449168 kubelet[3435]: W1216 02:09:37.449154 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.449373 kubelet[3435]: E1216 02:09:37.449187 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.451434 kubelet[3435]: E1216 02:09:37.451259 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.451434 kubelet[3435]: W1216 02:09:37.451419 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.451660 kubelet[3435]: E1216 02:09:37.451567 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.453287 kubelet[3435]: E1216 02:09:37.453234 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.453287 kubelet[3435]: W1216 02:09:37.453273 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.453510 kubelet[3435]: E1216 02:09:37.453373 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.456084 kubelet[3435]: E1216 02:09:37.455310 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.456084 kubelet[3435]: W1216 02:09:37.455471 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.456084 kubelet[3435]: E1216 02:09:37.455507 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.460481 kubelet[3435]: E1216 02:09:37.460415 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.460481 kubelet[3435]: W1216 02:09:37.460483 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.460657 kubelet[3435]: E1216 02:09:37.460518 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.463780 kubelet[3435]: E1216 02:09:37.463697 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.463780 kubelet[3435]: W1216 02:09:37.463768 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.464005 kubelet[3435]: E1216 02:09:37.463839 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.466605 kubelet[3435]: E1216 02:09:37.466415 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.466605 kubelet[3435]: W1216 02:09:37.466579 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.466926 kubelet[3435]: E1216 02:09:37.466871 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.492712 kubelet[3435]: E1216 02:09:37.490100 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.493414 kubelet[3435]: W1216 02:09:37.492898 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.493414 kubelet[3435]: E1216 02:09:37.492952 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.556510 kubelet[3435]: E1216 02:09:37.556455 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.556510 kubelet[3435]: W1216 02:09:37.556502 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.556725 kubelet[3435]: E1216 02:09:37.556536 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.610130 containerd[1959]: time="2025-12-16T02:09:37.610027911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5tkx4,Uid:a5d5724e-9ef6-43d2-86ce-d3837b27c82c,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:37.654961 kubelet[3435]: E1216 02:09:37.654601 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:09:37.694568 containerd[1959]: time="2025-12-16T02:09:37.694314279Z" level=info msg="connecting to shim 3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a" address="unix:///run/containerd/s/39d3e6e276ad180d419f76f3055ca4b5b16846b803c33fb43c824ee4821f0161" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:37.698117 kubelet[3435]: E1216 02:09:37.697927 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.698117 kubelet[3435]: W1216 02:09:37.697977 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.698117 kubelet[3435]: E1216 02:09:37.698014 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.702309 kubelet[3435]: E1216 02:09:37.701568 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.702309 kubelet[3435]: W1216 02:09:37.701929 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.702309 kubelet[3435]: E1216 02:09:37.702017 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.704840 kubelet[3435]: E1216 02:09:37.704775 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.704840 kubelet[3435]: W1216 02:09:37.704822 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.705112 kubelet[3435]: E1216 02:09:37.704856 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.709189 kubelet[3435]: E1216 02:09:37.709116 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.709189 kubelet[3435]: W1216 02:09:37.709165 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.709403 kubelet[3435]: E1216 02:09:37.709201 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.712526 kubelet[3435]: E1216 02:09:37.710935 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.712526 kubelet[3435]: W1216 02:09:37.710980 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.712526 kubelet[3435]: E1216 02:09:37.711013 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.714079 kubelet[3435]: E1216 02:09:37.713371 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.714079 kubelet[3435]: W1216 02:09:37.713410 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.714079 kubelet[3435]: E1216 02:09:37.713443 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.716678 kubelet[3435]: E1216 02:09:37.716165 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.716678 kubelet[3435]: W1216 02:09:37.716234 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.716678 kubelet[3435]: E1216 02:09:37.716327 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.719901 kubelet[3435]: E1216 02:09:37.719842 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.719901 kubelet[3435]: W1216 02:09:37.719886 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.720332 kubelet[3435]: E1216 02:09:37.719943 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.721441 kubelet[3435]: E1216 02:09:37.721029 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.721441 kubelet[3435]: W1216 02:09:37.721267 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.721441 kubelet[3435]: E1216 02:09:37.721332 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.727374 kubelet[3435]: E1216 02:09:37.727126 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.727374 kubelet[3435]: W1216 02:09:37.727169 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.727374 kubelet[3435]: E1216 02:09:37.727202 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.728564 kubelet[3435]: E1216 02:09:37.727837 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.728564 kubelet[3435]: W1216 02:09:37.728406 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.729451 kubelet[3435]: E1216 02:09:37.729273 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.731819 kubelet[3435]: E1216 02:09:37.731779 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.732136 kubelet[3435]: W1216 02:09:37.732017 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.733201 kubelet[3435]: E1216 02:09:37.732904 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.735832 kubelet[3435]: E1216 02:09:37.735148 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.735832 kubelet[3435]: W1216 02:09:37.735186 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.735832 kubelet[3435]: E1216 02:09:37.735217 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.736713 kubelet[3435]: E1216 02:09:37.736681 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.737174 kubelet[3435]: W1216 02:09:37.736930 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.737174 kubelet[3435]: E1216 02:09:37.736977 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.738796 kubelet[3435]: E1216 02:09:37.738748 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.739646 kubelet[3435]: W1216 02:09:37.739193 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.739646 kubelet[3435]: E1216 02:09:37.739237 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.740555 kubelet[3435]: E1216 02:09:37.740426 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.740824 kubelet[3435]: W1216 02:09:37.740795 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.741066 kubelet[3435]: E1216 02:09:37.741024 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.745196 kubelet[3435]: E1216 02:09:37.742756 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.745196 kubelet[3435]: W1216 02:09:37.743135 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.745196 kubelet[3435]: E1216 02:09:37.743181 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.746653 kubelet[3435]: E1216 02:09:37.746517 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.746962 kubelet[3435]: W1216 02:09:37.746905 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.747534 kubelet[3435]: E1216 02:09:37.747208 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.748123 kubelet[3435]: E1216 02:09:37.748071 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.748475 kubelet[3435]: W1216 02:09:37.748419 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.749070 kubelet[3435]: E1216 02:09:37.748666 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.749971 kubelet[3435]: E1216 02:09:37.749830 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.750497 kubelet[3435]: W1216 02:09:37.750182 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.750497 kubelet[3435]: E1216 02:09:37.750229 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.752400 kubelet[3435]: E1216 02:09:37.752269 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.753491 kubelet[3435]: W1216 02:09:37.753135 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.753491 kubelet[3435]: E1216 02:09:37.753190 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.753491 kubelet[3435]: I1216 02:09:37.753254 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f50b9bab-3859-4d1b-ba44-d918ecbff9d1-registration-dir\") pod \"csi-node-driver-ghk7w\" (UID: \"f50b9bab-3859-4d1b-ba44-d918ecbff9d1\") " pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:37.754729 kubelet[3435]: E1216 02:09:37.754595 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.755632 kubelet[3435]: W1216 02:09:37.754997 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.755632 kubelet[3435]: E1216 02:09:37.755042 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.755632 kubelet[3435]: I1216 02:09:37.755226 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f50b9bab-3859-4d1b-ba44-d918ecbff9d1-varrun\") pod \"csi-node-driver-ghk7w\" (UID: \"f50b9bab-3859-4d1b-ba44-d918ecbff9d1\") " pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:37.756631 kubelet[3435]: E1216 02:09:37.756498 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.756938 kubelet[3435]: W1216 02:09:37.756903 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.757246 kubelet[3435]: E1216 02:09:37.757196 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.757663 kubelet[3435]: I1216 02:09:37.757497 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdjz\" (UniqueName: \"kubernetes.io/projected/f50b9bab-3859-4d1b-ba44-d918ecbff9d1-kube-api-access-6zdjz\") pod \"csi-node-driver-ghk7w\" (UID: \"f50b9bab-3859-4d1b-ba44-d918ecbff9d1\") " pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:37.759257 kubelet[3435]: E1216 02:09:37.759191 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.760158 kubelet[3435]: W1216 02:09:37.759987 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.760158 kubelet[3435]: E1216 02:09:37.760041 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.762181 kubelet[3435]: E1216 02:09:37.762142 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.762842 kubelet[3435]: W1216 02:09:37.762355 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.762842 kubelet[3435]: E1216 02:09:37.762402 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.763682 kubelet[3435]: E1216 02:09:37.763547 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.764213 kubelet[3435]: W1216 02:09:37.764169 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.764719 kubelet[3435]: E1216 02:09:37.764575 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.767106 kubelet[3435]: E1216 02:09:37.767073 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.767535 kubelet[3435]: W1216 02:09:37.767290 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.767535 kubelet[3435]: E1216 02:09:37.767333 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.770082 kubelet[3435]: I1216 02:09:37.769108 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f50b9bab-3859-4d1b-ba44-d918ecbff9d1-kubelet-dir\") pod \"csi-node-driver-ghk7w\" (UID: \"f50b9bab-3859-4d1b-ba44-d918ecbff9d1\") " pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:37.770650 kubelet[3435]: E1216 02:09:37.770181 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.770650 kubelet[3435]: W1216 02:09:37.770219 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.770650 kubelet[3435]: E1216 02:09:37.770254 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.771910 kubelet[3435]: E1216 02:09:37.771834 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.771910 kubelet[3435]: W1216 02:09:37.771886 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.772199 kubelet[3435]: E1216 02:09:37.771920 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.772384 kubelet[3435]: E1216 02:09:37.772341 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.772384 kubelet[3435]: W1216 02:09:37.772373 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.772509 kubelet[3435]: E1216 02:09:37.772400 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.773370 kubelet[3435]: E1216 02:09:37.773319 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.773808 kubelet[3435]: W1216 02:09:37.773358 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.773808 kubelet[3435]: E1216 02:09:37.773597 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.774494 kubelet[3435]: I1216 02:09:37.774417 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f50b9bab-3859-4d1b-ba44-d918ecbff9d1-socket-dir\") pod \"csi-node-driver-ghk7w\" (UID: \"f50b9bab-3859-4d1b-ba44-d918ecbff9d1\") " pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:37.775499 kubelet[3435]: E1216 02:09:37.775295 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.775499 kubelet[3435]: W1216 02:09:37.775337 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.775499 kubelet[3435]: E1216 02:09:37.775433 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.777449 kubelet[3435]: E1216 02:09:37.776915 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.777449 kubelet[3435]: W1216 02:09:37.776956 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.777449 kubelet[3435]: E1216 02:09:37.777019 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.778662 kubelet[3435]: E1216 02:09:37.778540 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.778662 kubelet[3435]: W1216 02:09:37.778597 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.778662 kubelet[3435]: E1216 02:09:37.778634 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.779881 kubelet[3435]: E1216 02:09:37.779823 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.779881 kubelet[3435]: W1216 02:09:37.779867 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.780502 kubelet[3435]: E1216 02:09:37.779906 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.823564 systemd[1]: Started cri-containerd-3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a.scope - libcontainer container 3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a. Dec 16 02:09:37.869000 audit: BPF prog-id=154 op=LOAD Dec 16 02:09:37.873000 audit: BPF prog-id=155 op=LOAD Dec 16 02:09:37.873000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.876000 audit: BPF prog-id=155 op=UNLOAD Dec 16 02:09:37.876000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.879249 kubelet[3435]: E1216 02:09:37.878466 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.879249 kubelet[3435]: W1216 02:09:37.878498 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.879249 kubelet[3435]: E1216 02:09:37.878531 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.882080 kubelet[3435]: E1216 02:09:37.881832 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.883020 kubelet[3435]: W1216 02:09:37.882663 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.880000 audit: BPF prog-id=156 op=LOAD Dec 16 02:09:37.884410 kubelet[3435]: E1216 02:09:37.883518 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.880000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.885379 kubelet[3435]: E1216 02:09:37.884413 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.885379 kubelet[3435]: W1216 02:09:37.884443 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.885379 kubelet[3435]: E1216 02:09:37.884475 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.885379 kubelet[3435]: E1216 02:09:37.884878 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.885379 kubelet[3435]: W1216 02:09:37.884946 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.885379 kubelet[3435]: E1216 02:09:37.884979 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.885774 kubelet[3435]: E1216 02:09:37.885467 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.885774 kubelet[3435]: W1216 02:09:37.885496 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.885774 kubelet[3435]: E1216 02:09:37.885527 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.885000 audit: BPF prog-id=157 op=LOAD Dec 16 02:09:37.885000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.885000 audit: BPF prog-id=157 op=UNLOAD Dec 16 02:09:37.885000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.885000 audit: BPF prog-id=156 op=UNLOAD Dec 16 02:09:37.885000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.885000 audit: BPF prog-id=158 op=LOAD Dec 16 02:09:37.885000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:37.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303762623064393432663165366536356337366130383533353963 Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.887749 3435 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.894907 kubelet[3435]: W1216 02:09:37.887779 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.887811 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.888634 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.894907 kubelet[3435]: W1216 02:09:37.888667 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.888700 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.890817 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.894907 kubelet[3435]: W1216 02:09:37.890849 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.890881 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.894907 kubelet[3435]: E1216 02:09:37.891619 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.897488 kubelet[3435]: W1216 02:09:37.891653 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.897488 kubelet[3435]: E1216 02:09:37.891713 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.897488 kubelet[3435]: E1216 02:09:37.892312 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.897488 kubelet[3435]: W1216 02:09:37.892381 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.897488 kubelet[3435]: E1216 02:09:37.892444 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.897488 kubelet[3435]: E1216 02:09:37.894238 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.897488 kubelet[3435]: W1216 02:09:37.894274 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.897488 kubelet[3435]: E1216 02:09:37.894308 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.897488 kubelet[3435]: E1216 02:09:37.894985 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.897488 kubelet[3435]: W1216 02:09:37.895016 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.895159 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.895619 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.898649 kubelet[3435]: W1216 02:09:37.895643 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.895672 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.896038 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.898649 kubelet[3435]: W1216 02:09:37.896243 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.896278 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.897406 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.898649 kubelet[3435]: W1216 02:09:37.897437 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.898649 kubelet[3435]: E1216 02:09:37.897469 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.901619 kubelet[3435]: E1216 02:09:37.900639 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.901619 kubelet[3435]: W1216 02:09:37.900669 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.901619 kubelet[3435]: E1216 02:09:37.900702 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.902313 kubelet[3435]: E1216 02:09:37.902219 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.902313 kubelet[3435]: W1216 02:09:37.902270 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.902313 kubelet[3435]: E1216 02:09:37.902307 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.903705 kubelet[3435]: E1216 02:09:37.903655 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.903705 kubelet[3435]: W1216 02:09:37.903693 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.904524 kubelet[3435]: E1216 02:09:37.903729 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.904524 kubelet[3435]: E1216 02:09:37.904222 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.904524 kubelet[3435]: W1216 02:09:37.904247 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.904524 kubelet[3435]: E1216 02:09:37.904274 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.905532 kubelet[3435]: E1216 02:09:37.905482 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.905532 kubelet[3435]: W1216 02:09:37.905522 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.906341 kubelet[3435]: E1216 02:09:37.905557 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.906341 kubelet[3435]: E1216 02:09:37.905992 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.906341 kubelet[3435]: W1216 02:09:37.906017 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.906341 kubelet[3435]: E1216 02:09:37.906066 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.907263 kubelet[3435]: E1216 02:09:37.906735 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.907263 kubelet[3435]: W1216 02:09:37.906772 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.907263 kubelet[3435]: E1216 02:09:37.906806 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.908414 kubelet[3435]: E1216 02:09:37.908314 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.908414 kubelet[3435]: W1216 02:09:37.908359 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.908414 kubelet[3435]: E1216 02:09:37.908397 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.909652 kubelet[3435]: E1216 02:09:37.908903 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.909652 kubelet[3435]: W1216 02:09:37.908932 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.909652 kubelet[3435]: E1216 02:09:37.908963 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:37.910423 kubelet[3435]: E1216 02:09:37.910368 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.910423 kubelet[3435]: W1216 02:09:37.910411 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.910423 kubelet[3435]: E1216 02:09:37.910448 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:37.948088 kubelet[3435]: E1216 02:09:37.947941 3435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:37.948088 kubelet[3435]: W1216 02:09:37.947984 3435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:37.948088 kubelet[3435]: E1216 02:09:37.948019 3435 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:38.005000 audit: BPF prog-id=159 op=LOAD Dec 16 02:09:38.008000 audit: BPF prog-id=160 op=LOAD Dec 16 02:09:38.008000 audit[3991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.008000 audit: BPF prog-id=160 op=UNLOAD Dec 16 02:09:38.008000 audit[3991]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.009000 audit: BPF prog-id=161 op=LOAD Dec 16 02:09:38.009000 audit[3991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.009000 audit: BPF prog-id=162 op=LOAD Dec 16 02:09:38.009000 audit[3991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.010000 audit: BPF prog-id=162 op=UNLOAD 
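Editor's note: the burst of driver-call.go / plugins.go messages above is the kubelet probing the FlexVolume directory nodeagent~uds before the `uds` binary has been installed (Calico's flexvol-driver init container only runs later in this log). The executable is not found in $PATH, so the `init` call produces no output, and unmarshalling zero bytes is exactly what yields "unexpected end of JSON input". A minimal sketch of that failure mode, assuming a driverStatus shape that only approximates the kubelet's internal type:

```go
// Sketch (not kubelet code): why a missing FlexVolume binary surfaces
// as "unexpected end of JSON input" during the "init" probe.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the JSON a FlexVolume driver prints for
// "init"; the exact field set in kubelet may differ.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus

	// Empty output, as when the executable is not found in $PATH.
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// What a working driver would typically print for "init".
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Println(st.Status, st.Capabilities) // Success map[attach:false]
	}
}
```

Once the flexvol-driver container started further down in this log has copied the binary into place, the same probe presumably returns a small JSON status and these errors stop recurring.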
Dec 16 02:09:38.010000 audit[3991]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.011000 audit: BPF prog-id=161 op=UNLOAD Dec 16 02:09:38.011000 audit[3991]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.014000 audit: BPF prog-id=163 op=LOAD Dec 16 02:09:38.014000 audit[3991]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3953 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646135346466316130303234613766636634316263383631373934 Dec 16 02:09:38.034000 audit[4050]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:38.034000 audit[4050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe80e45a0 a2=0 a3=1 items=0 ppid=3591 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:38.038000 audit[4050]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:38.038000 audit[4050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe80e45a0 a2=0 a3=1 items=0 ppid=3591 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:38.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:38.068254 containerd[1959]: time="2025-12-16T02:09:38.067710841Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-5tkx4,Uid:a5d5724e-9ef6-43d2-86ce-d3837b27c82c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\"" Dec 16 02:09:38.076090 containerd[1959]: time="2025-12-16T02:09:38.075859129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 02:09:38.105431 containerd[1959]: time="2025-12-16T02:09:38.105347053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655ddcc49f-p859j,Uid:9c809760-b0d1-466a-973a-f2d03dae4ae7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c07bb0d942f1e6e65c76a085359c29dba0acccc225f2d2ae0c62fcc43a879ea\"" Dec 16 02:09:38.824979 kubelet[3435]: E1216 02:09:38.824912 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:09:39.293403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64032667.mount: Deactivated successfully. Dec 16 02:09:39.433125 containerd[1959]: time="2025-12-16T02:09:39.432758704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:39.435936 containerd[1959]: time="2025-12-16T02:09:39.435843808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 02:09:39.438630 containerd[1959]: time="2025-12-16T02:09:39.438535420Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:39.444148 containerd[1959]: time="2025-12-16T02:09:39.443746696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:39.446510 containerd[1959]: time="2025-12-16T02:09:39.445471936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.369536511s" Dec 16 02:09:39.446510 containerd[1959]: time="2025-12-16T02:09:39.445552024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 02:09:39.448539 containerd[1959]: time="2025-12-16T02:09:39.448451032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 02:09:39.457904 containerd[1959]: time="2025-12-16T02:09:39.457818208Z" level=info msg="CreateContainer within sandbox \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 02:09:39.483239 containerd[1959]: time="2025-12-16T02:09:39.483173104Z" level=info msg="Container 6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c: CDI devices from CRI Config.CDIDevices: []" 
Dec 16 02:09:39.518263 containerd[1959]: time="2025-12-16T02:09:39.518077084Z" level=info msg="CreateContainer within sandbox \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c\"" Dec 16 02:09:39.519538 containerd[1959]: time="2025-12-16T02:09:39.519296165Z" level=info msg="StartContainer for \"6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c\"" Dec 16 02:09:39.523563 containerd[1959]: time="2025-12-16T02:09:39.523504517Z" level=info msg="connecting to shim 6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c" address="unix:///run/containerd/s/39d3e6e276ad180d419f76f3055ca4b5b16846b803c33fb43c824ee4821f0161" protocol=ttrpc version=3 Dec 16 02:09:39.571505 systemd[1]: Started cri-containerd-6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c.scope - libcontainer container 6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c. Dec 16 02:09:39.664000 audit: BPF prog-id=164 op=LOAD Dec 16 02:09:39.664000 audit[4075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3953 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663626239353136646430393365303738643433323333313937346566 Dec 16 02:09:39.664000 audit: BPF prog-id=165 op=LOAD Dec 16 02:09:39.664000 audit[4075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3953 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663626239353136646430393365303738643433323333313937346566 Dec 16 02:09:39.664000 audit: BPF prog-id=165 op=UNLOAD Dec 16 02:09:39.664000 audit[4075]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663626239353136646430393365303738643433323333313937346566 Dec 16 02:09:39.664000 audit: BPF prog-id=164 op=UNLOAD Dec 16 02:09:39.664000 audit[4075]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.664000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663626239353136646430393365303738643433323333313937346566 Dec 16 02:09:39.664000 audit: BPF prog-id=166 op=LOAD Dec 16 02:09:39.664000 audit[4075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3953 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663626239353136646430393365303738643433323333313937346566 Dec 16 02:09:39.720863 containerd[1959]: time="2025-12-16T02:09:39.720666162Z" level=info msg="StartContainer for \"6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c\" returns successfully" Dec 16 02:09:39.757442 systemd[1]: cri-containerd-6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c.scope: Deactivated successfully. Dec 16 02:09:39.760000 audit: BPF prog-id=166 op=UNLOAD Dec 16 02:09:39.767016 containerd[1959]: time="2025-12-16T02:09:39.766913586Z" level=info msg="received container exit event container_id:\"6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c\" id:\"6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c\" pid:4087 exited_at:{seconds:1765850979 nanos:765706038}" Dec 16 02:09:40.228967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cbb9516dd093e078d432331974ef056c32dab7a95872db1e289fa55c39af42c-rootfs.mount: Deactivated successfully. 
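Editor's note: the audit PROCTITLE fields in the records above (and in the similar runc records that follow) are the process command line, hex-encoded with NUL bytes separating the arguments. A small decoder sketch, using only a prefix of one of those values copied from the log:

```go
// Decode an audit PROCTITLE value: hex string -> NUL-separated argv.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Prefix of one PROCTITLE value from the records above.
	p := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(p)
	if err != nil {
		panic(err)
	}
	// argv entries are separated by NUL bytes.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(args) // [runc --root /run/containerd/runc/k8s.io]
}
```

The remainder of each value decodes the same way, ending in the containerd task directory under /run/containerd/io.containerd.runtime.v2.task/k8s.io/ for the container in question.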
Dec 16 02:09:40.824469 kubelet[3435]: E1216 02:09:40.824391 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:09:41.669762 containerd[1959]: time="2025-12-16T02:09:41.669670075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:41.671186 containerd[1959]: time="2025-12-16T02:09:41.671077987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 02:09:41.673203 containerd[1959]: time="2025-12-16T02:09:41.672590071Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:41.676023 containerd[1959]: time="2025-12-16T02:09:41.675972127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:41.677336 containerd[1959]: time="2025-12-16T02:09:41.677289355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.228760251s" Dec 16 02:09:41.677523 containerd[1959]: time="2025-12-16T02:09:41.677492803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 02:09:41.680619 containerd[1959]: time="2025-12-16T02:09:41.680283199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 02:09:41.715694 containerd[1959]: time="2025-12-16T02:09:41.715645423Z" level=info msg="CreateContainer within sandbox \"0c07bb0d942f1e6e65c76a085359c29dba0acccc225f2d2ae0c62fcc43a879ea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 02:09:41.730452 containerd[1959]: time="2025-12-16T02:09:41.730394515Z" level=info msg="Container 7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:41.737892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856185967.mount: Deactivated successfully. 
Dec 16 02:09:41.748537 containerd[1959]: time="2025-12-16T02:09:41.748254572Z" level=info msg="CreateContainer within sandbox \"0c07bb0d942f1e6e65c76a085359c29dba0acccc225f2d2ae0c62fcc43a879ea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c\"" Dec 16 02:09:41.751224 containerd[1959]: time="2025-12-16T02:09:41.750300176Z" level=info msg="StartContainer for \"7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c\"" Dec 16 02:09:41.753281 containerd[1959]: time="2025-12-16T02:09:41.753194600Z" level=info msg="connecting to shim 7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c" address="unix:///run/containerd/s/5ad0c22bdcd9bcec666d85b2510641853cec5c7e59c4bf5029e3b8bdd2f50d49" protocol=ttrpc version=3 Dec 16 02:09:41.795425 systemd[1]: Started cri-containerd-7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c.scope - libcontainer container 7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c. Dec 16 02:09:41.827000 audit: BPF prog-id=167 op=LOAD Dec 16 02:09:41.829495 kernel: kauditd_printk_skb: 68 callbacks suppressed Dec 16 02:09:41.829624 kernel: audit: type=1334 audit(1765850981.827:563): prog-id=167 op=LOAD Dec 16 02:09:41.830000 audit: BPF prog-id=168 op=LOAD Dec 16 02:09:41.830000 audit[4135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.839302 kernel: audit: type=1334 audit(1765850981.830:564): prog-id=168 op=LOAD Dec 16 02:09:41.839464 kernel: audit: type=1300 audit(1765850981.830:564): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.847574 kernel: audit: type=1327 audit(1765850981.830:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.830000 audit: BPF prog-id=168 op=UNLOAD Dec 16 02:09:41.830000 audit[4135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.850332 kernel: audit: type=1334 audit(1765850981.830:565): prog-id=168 op=UNLOAD Dec 16 02:09:41.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.862867 kernel: 
audit: type=1300 audit(1765850981.830:565): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.862985 kernel: audit: type=1327 audit(1765850981.830:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.830000 audit: BPF prog-id=169 op=LOAD Dec 16 02:09:41.865570 kernel: audit: type=1334 audit(1765850981.830:566): prog-id=169 op=LOAD Dec 16 02:09:41.830000 audit[4135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.872136 kernel: audit: type=1300 audit(1765850981.830:566): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.878358 kernel: audit: type=1327 audit(1765850981.830:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.832000 audit: BPF prog-id=170 op=LOAD Dec 16 02:09:41.832000 audit[4135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.838000 audit: BPF prog-id=170 op=UNLOAD Dec 16 02:09:41.838000 audit[4135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.838000 audit: BPF prog-id=169 op=UNLOAD Dec 16 
02:09:41.838000 audit[4135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.838000 audit: BPF prog-id=171 op=LOAD Dec 16 02:09:41.838000 audit[4135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3859 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343939653333386365393163343933303330313038303232663966 Dec 16 02:09:41.929377 containerd[1959]: time="2025-12-16T02:09:41.928331744Z" level=info msg="StartContainer for \"7b499e338ce91c493030108022f9f56bf98e430eb9349183d40f81237dfc7d1c\" returns successfully" Dec 16 02:09:42.092405 kubelet[3435]: I1216 02:09:42.092306 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-655ddcc49f-p859j" podStartSLOduration=2.5260674400000003 podStartE2EDuration="6.092287097s" podCreationTimestamp="2025-12-16 02:09:36 +0000 UTC" firstStartedPulling="2025-12-16 02:09:38.112984334 +0000 UTC m=+35.599185934" lastFinishedPulling="2025-12-16 02:09:41.679203991 +0000 UTC m=+39.165405591" observedRunningTime="2025-12-16 02:09:42.090519317 +0000 UTC m=+39.576720929" watchObservedRunningTime="2025-12-16 02:09:42.092287097 +0000 UTC m=+39.578488721" Dec 16 02:09:42.827029 kubelet[3435]: E1216 02:09:42.826090 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:09:43.157000 audit[4171]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:43.157000 audit[4171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe07f45e0 a2=0 a3=1 items=0 ppid=3591 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:43.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:43.161000 audit[4171]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:43.161000 audit[4171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe07f45e0 a2=0 a3=1 items=0 ppid=3591 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:43.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:44.823812 kubelet[3435]: E1216 02:09:44.823612 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:09:44.951678 containerd[1959]: time="2025-12-16T02:09:44.951530519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.955164 containerd[1959]: time="2025-12-16T02:09:44.954856859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 02:09:44.957496 containerd[1959]: time="2025-12-16T02:09:44.957310992Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.966158 containerd[1959]: time="2025-12-16T02:09:44.965950860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.971754 containerd[1959]: time="2025-12-16T02:09:44.971516580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.291167021s" Dec 16 02:09:44.971754 containerd[1959]: time="2025-12-16T02:09:44.971584752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 02:09:44.983224 containerd[1959]: time="2025-12-16T02:09:44.982318572Z" level=info msg="CreateContainer within sandbox \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 02:09:45.005189 containerd[1959]: time="2025-12-16T02:09:45.004538744Z" level=info msg="Container eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:45.034097 containerd[1959]: time="2025-12-16T02:09:45.033949316Z" level=info msg="CreateContainer within sandbox \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66\"" Dec 16 02:09:45.037777 containerd[1959]: time="2025-12-16T02:09:45.035901308Z" level=info msg="StartContainer for \"eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66\"" Dec 16 02:09:45.041823 containerd[1959]: time="2025-12-16T02:09:45.041697656Z" level=info msg="connecting to shim eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66" 
address="unix:///run/containerd/s/39d3e6e276ad180d419f76f3055ca4b5b16846b803c33fb43c824ee4821f0161" protocol=ttrpc version=3 Dec 16 02:09:45.091420 systemd[1]: Started cri-containerd-eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66.scope - libcontainer container eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66. Dec 16 02:09:45.169000 audit: BPF prog-id=172 op=LOAD Dec 16 02:09:45.169000 audit[4182]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3953 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562356539383233346365633366396265343732623834613731393835 Dec 16 02:09:45.169000 audit: BPF prog-id=173 op=LOAD Dec 16 02:09:45.169000 audit[4182]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3953 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562356539383233346365633366396265343732623834613731393835 Dec 16 02:09:45.169000 audit: BPF prog-id=173 op=UNLOAD Dec 16 02:09:45.169000 audit[4182]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562356539383233346365633366396265343732623834613731393835 Dec 16 02:09:45.169000 audit: BPF prog-id=172 op=UNLOAD Dec 16 02:09:45.169000 audit[4182]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562356539383233346365633366396265343732623834613731393835 Dec 16 02:09:45.169000 audit: BPF prog-id=174 op=LOAD Dec 16 02:09:45.169000 audit[4182]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3953 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.169000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562356539383233346365633366396265343732623834613731393835 Dec 16 02:09:45.209745 containerd[1959]: time="2025-12-16T02:09:45.209641461Z" level=info msg="StartContainer for \"eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66\" returns successfully" Dec 16 02:09:46.224455 containerd[1959]: time="2025-12-16T02:09:46.224374906Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:09:46.229406 systemd[1]: cri-containerd-eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66.scope: Deactivated successfully. Dec 16 02:09:46.230208 systemd[1]: cri-containerd-eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66.scope: Consumed 1.048s CPU time, 186.5M memory peak, 165.9M written to disk. Dec 16 02:09:46.233000 audit: BPF prog-id=174 op=UNLOAD Dec 16 02:09:46.238903 containerd[1959]: time="2025-12-16T02:09:46.238731466Z" level=info msg="received container exit event container_id:\"eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66\" id:\"eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66\" pid:4195 exited_at:{seconds:1765850986 nanos:238318198}" Dec 16 02:09:46.285655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb5e98234cec3f9be472b84a719857ad8f3bedd493f4afddc5a9afff4ea81b66-rootfs.mount: Deactivated successfully. Dec 16 02:09:46.310381 kubelet[3435]: I1216 02:09:46.310344 3435 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 02:09:46.429922 systemd[1]: Created slice kubepods-burstable-pode5f8c11a_871b_4f4b_9b56_288874230e6c.slice - libcontainer container kubepods-burstable-pode5f8c11a_871b_4f4b_9b56_288874230e6c.slice. 
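Editor's note: for the calico-typha pod_startup_latency_tracker entry a few lines above, the reported numbers are consistent with podStartSLOduration being the end-to-end startup time minus the image-pull window. A quick check with the timestamps copied from that entry (the monotonic "m=+..." suffixes dropped for parsing); the split into E2E-minus-pull is an assumption inferred from the arithmetic, not taken from the log itself:

```go
// Recompute the durations reported by pod_startup_latency_tracker.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-16 02:09:36 +0000 UTC")            // podCreationTimestamp
	pullStart := mustParse("2025-12-16 02:09:38.112984334 +0000 UTC") // firstStartedPulling
	pullEnd := mustParse("2025-12-16 02:09:41.679203991 +0000 UTC")   // lastFinishedPulling
	observed := mustParse("2025-12-16 02:09:42.092287097 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)   // 6.092287097s = podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // 3.566219657s spent pulling images
	fmt.Println("E2E:", e2e, "SLO:", e2e-pull) // SLO: 2.52606744s
}
```

The trailing ...0000003 in the logged podStartSLOduration looks like ordinary float64 rounding of the same 2.52606744s difference.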
Dec 16 02:09:46.460883 kubelet[3435]: I1216 02:09:46.460140 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhkz\" (UniqueName: \"kubernetes.io/projected/6568dd52-5b39-4fa9-85f5-6b3edadbd038-kube-api-access-cbhkz\") pod \"coredns-674b8bbfcf-spv6q\" (UID: \"6568dd52-5b39-4fa9-85f5-6b3edadbd038\") " pod="kube-system/coredns-674b8bbfcf-spv6q" Dec 16 02:09:46.462370 kubelet[3435]: I1216 02:09:46.461227 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2jv\" (UniqueName: \"kubernetes.io/projected/e5f8c11a-871b-4f4b-9b56-288874230e6c-kube-api-access-vm2jv\") pod \"coredns-674b8bbfcf-zk92z\" (UID: \"e5f8c11a-871b-4f4b-9b56-288874230e6c\") " pod="kube-system/coredns-674b8bbfcf-zk92z" Dec 16 02:09:46.463450 kubelet[3435]: I1216 02:09:46.462501 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f8c11a-871b-4f4b-9b56-288874230e6c-config-volume\") pod \"coredns-674b8bbfcf-zk92z\" (UID: \"e5f8c11a-871b-4f4b-9b56-288874230e6c\") " pod="kube-system/coredns-674b8bbfcf-zk92z" Dec 16 02:09:46.466072 kubelet[3435]: I1216 02:09:46.463893 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4cfe125-283d-49d5-a19e-c93c8097201d-tigera-ca-bundle\") pod \"calico-kube-controllers-8478cfdcd8-h8dlm\" (UID: \"e4cfe125-283d-49d5-a19e-c93c8097201d\") " pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" Dec 16 02:09:46.466072 kubelet[3435]: I1216 02:09:46.464331 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhxd\" (UniqueName: \"kubernetes.io/projected/e4cfe125-283d-49d5-a19e-c93c8097201d-kube-api-access-xmhxd\") pod \"calico-kube-controllers-8478cfdcd8-h8dlm\" (UID: \"e4cfe125-283d-49d5-a19e-c93c8097201d\") " pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" Dec 16 02:09:46.466072 kubelet[3435]: I1216 02:09:46.464870 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6568dd52-5b39-4fa9-85f5-6b3edadbd038-config-volume\") pod \"coredns-674b8bbfcf-spv6q\" (UID: \"6568dd52-5b39-4fa9-85f5-6b3edadbd038\") " pod="kube-system/coredns-674b8bbfcf-spv6q" Dec 16 02:09:46.465689 systemd[1]: Created slice kubepods-besteffort-pode4cfe125_283d_49d5_a19e_c93c8097201d.slice - libcontainer container kubepods-besteffort-pode4cfe125_283d_49d5_a19e_c93c8097201d.slice. Dec 16 02:09:46.494101 systemd[1]: Created slice kubepods-burstable-pod6568dd52_5b39_4fa9_85f5_6b3edadbd038.slice - libcontainer container kubepods-burstable-pod6568dd52_5b39_4fa9_85f5_6b3edadbd038.slice. Dec 16 02:09:46.517748 systemd[1]: Created slice kubepods-besteffort-pod495e068f_5f86_4d9c_b537_813d53666a90.slice - libcontainer container kubepods-besteffort-pod495e068f_5f86_4d9c_b537_813d53666a90.slice. 
Dec 16 02:09:46.566109 kubelet[3435]: I1216 02:09:46.566003 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/495e068f-5f86-4d9c-b537-813d53666a90-calico-apiserver-certs\") pod \"calico-apiserver-7bbfbc6879-7gb6n\" (UID: \"495e068f-5f86-4d9c-b537-813d53666a90\") " pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" Dec 16 02:09:46.566595 kubelet[3435]: I1216 02:09:46.566357 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nl9\" (UniqueName: \"kubernetes.io/projected/495e068f-5f86-4d9c-b537-813d53666a90-kube-api-access-p8nl9\") pod \"calico-apiserver-7bbfbc6879-7gb6n\" (UID: \"495e068f-5f86-4d9c-b537-813d53666a90\") " pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" Dec 16 02:09:46.590874 systemd[1]: Created slice kubepods-besteffort-pod874b2e4b_6331_48a2_85aa_be2fc619cfc8.slice - libcontainer container kubepods-besteffort-pod874b2e4b_6331_48a2_85aa_be2fc619cfc8.slice. Dec 16 02:09:46.667472 kubelet[3435]: I1216 02:09:46.667388 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjcz\" (UniqueName: \"kubernetes.io/projected/d4c19668-1432-40a4-b193-31fc63aa2910-kube-api-access-bwjcz\") pod \"whisker-6cb7ff6886-4gfbr\" (UID: \"d4c19668-1432-40a4-b193-31fc63aa2910\") " pod="calico-system/whisker-6cb7ff6886-4gfbr" Dec 16 02:09:46.669392 kubelet[3435]: I1216 02:09:46.667477 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874b2e4b-6331-48a2-85aa-be2fc619cfc8-goldmane-ca-bundle\") pod \"goldmane-666569f655-7kpqw\" (UID: \"874b2e4b-6331-48a2-85aa-be2fc619cfc8\") " pod="calico-system/goldmane-666569f655-7kpqw" Dec 16 02:09:46.669392 kubelet[3435]: I1216 02:09:46.667558 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-backend-key-pair\") pod \"whisker-6cb7ff6886-4gfbr\" (UID: \"d4c19668-1432-40a4-b193-31fc63aa2910\") " pod="calico-system/whisker-6cb7ff6886-4gfbr" Dec 16 02:09:46.669392 kubelet[3435]: I1216 02:09:46.667618 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-ca-bundle\") pod \"whisker-6cb7ff6886-4gfbr\" (UID: \"d4c19668-1432-40a4-b193-31fc63aa2910\") " pod="calico-system/whisker-6cb7ff6886-4gfbr" Dec 16 02:09:46.669392 kubelet[3435]: I1216 02:09:46.667673 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/874b2e4b-6331-48a2-85aa-be2fc619cfc8-goldmane-key-pair\") pod \"goldmane-666569f655-7kpqw\" (UID: \"874b2e4b-6331-48a2-85aa-be2fc619cfc8\") " pod="calico-system/goldmane-666569f655-7kpqw" Dec 16 02:09:46.669392 kubelet[3435]: I1216 02:09:46.667709 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqs6\" (UniqueName: \"kubernetes.io/projected/874b2e4b-6331-48a2-85aa-be2fc619cfc8-kube-api-access-cmqs6\") pod \"goldmane-666569f655-7kpqw\" (UID: \"874b2e4b-6331-48a2-85aa-be2fc619cfc8\") " pod="calico-system/goldmane-666569f655-7kpqw" 
Dec 16 02:09:46.669778 kubelet[3435]: I1216 02:09:46.667773 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/874b2e4b-6331-48a2-85aa-be2fc619cfc8-config\") pod \"goldmane-666569f655-7kpqw\" (UID: \"874b2e4b-6331-48a2-85aa-be2fc619cfc8\") " pod="calico-system/goldmane-666569f655-7kpqw" Dec 16 02:09:46.757158 containerd[1959]: time="2025-12-16T02:09:46.756776100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zk92z,Uid:e5f8c11a-871b-4f4b-9b56-288874230e6c,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:46.761229 systemd[1]: Created slice kubepods-besteffort-pod9240aeb5_ce88_4dd3_8f08_0a5e1f3a4a69.slice - libcontainer container kubepods-besteffort-pod9240aeb5_ce88_4dd3_8f08_0a5e1f3a4a69.slice. Dec 16 02:09:46.769372 kubelet[3435]: I1216 02:09:46.769286 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gh2\" (UniqueName: \"kubernetes.io/projected/9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69-kube-api-access-85gh2\") pod \"calico-apiserver-7bbfbc6879-szgl8\" (UID: \"9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69\") " pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" Dec 16 02:09:46.769765 kubelet[3435]: I1216 02:09:46.769402 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69-calico-apiserver-certs\") pod \"calico-apiserver-7bbfbc6879-szgl8\" (UID: \"9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69\") " pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" Dec 16 02:09:46.786461 containerd[1959]: time="2025-12-16T02:09:46.786372589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8478cfdcd8-h8dlm,Uid:e4cfe125-283d-49d5-a19e-c93c8097201d,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:46.792184 systemd[1]: Created slice kubepods-besteffort-podd4c19668_1432_40a4_b193_31fc63aa2910.slice - libcontainer container kubepods-besteffort-podd4c19668_1432_40a4_b193_31fc63aa2910.slice. Dec 16 02:09:46.815708 containerd[1959]: time="2025-12-16T02:09:46.815567929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spv6q,Uid:6568dd52-5b39-4fa9-85f5-6b3edadbd038,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:46.860672 containerd[1959]: time="2025-12-16T02:09:46.860545297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-7gb6n,Uid:495e068f-5f86-4d9c-b537-813d53666a90,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:09:46.863440 systemd[1]: Created slice kubepods-besteffort-podf50b9bab_3859_4d1b_ba44_d918ecbff9d1.slice - libcontainer container kubepods-besteffort-podf50b9bab_3859_4d1b_ba44_d918ecbff9d1.slice. 
Dec 16 02:09:46.895533 containerd[1959]: time="2025-12-16T02:09:46.894768241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghk7w,Uid:f50b9bab-3859-4d1b-ba44-d918ecbff9d1,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:46.941170 containerd[1959]: time="2025-12-16T02:09:46.941092285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7kpqw,Uid:874b2e4b-6331-48a2-85aa-be2fc619cfc8,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:47.091236 containerd[1959]: time="2025-12-16T02:09:47.091042894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-szgl8,Uid:9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:09:47.106085 containerd[1959]: time="2025-12-16T02:09:47.105988858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cb7ff6886-4gfbr,Uid:d4c19668-1432-40a4-b193-31fc63aa2910,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:47.258330 containerd[1959]: time="2025-12-16T02:09:47.258216875Z" level=error msg="Failed to destroy network for sandbox \"43fe4f445df5c99f53502fff8c8dce04ffd19d44ea6d6c603ea8ce26af225587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.474596 containerd[1959]: time="2025-12-16T02:09:47.474115752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zk92z,Uid:e5f8c11a-871b-4f4b-9b56-288874230e6c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe4f445df5c99f53502fff8c8dce04ffd19d44ea6d6c603ea8ce26af225587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.475815 kubelet[3435]: E1216 02:09:47.475436 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe4f445df5c99f53502fff8c8dce04ffd19d44ea6d6c603ea8ce26af225587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.475815 kubelet[3435]: E1216 02:09:47.475536 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe4f445df5c99f53502fff8c8dce04ffd19d44ea6d6c603ea8ce26af225587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zk92z" Dec 16 02:09:47.478009 kubelet[3435]: E1216 02:09:47.477202 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe4f445df5c99f53502fff8c8dce04ffd19d44ea6d6c603ea8ce26af225587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zk92z" Dec 16 02:09:47.479707 kubelet[3435]: E1216 02:09:47.478275 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-zk92z_kube-system(e5f8c11a-871b-4f4b-9b56-288874230e6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zk92z_kube-system(e5f8c11a-871b-4f4b-9b56-288874230e6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43fe4f445df5c99f53502fff8c8dce04ffd19d44ea6d6c603ea8ce26af225587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zk92z" podUID="e5f8c11a-871b-4f4b-9b56-288874230e6c" Dec 16 02:09:47.784477 containerd[1959]: time="2025-12-16T02:09:47.784398182Z" level=error msg="Failed to destroy network for sandbox \"e6514a0ff7e4da53996e39c6fc4d645b9d5a57fc1813df3511e63311380c127b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.792950 containerd[1959]: time="2025-12-16T02:09:47.792826406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8478cfdcd8-h8dlm,Uid:e4cfe125-283d-49d5-a19e-c93c8097201d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6514a0ff7e4da53996e39c6fc4d645b9d5a57fc1813df3511e63311380c127b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.795098 kubelet[3435]: E1216 02:09:47.794525 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6514a0ff7e4da53996e39c6fc4d645b9d5a57fc1813df3511e63311380c127b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.795098 kubelet[3435]: E1216 02:09:47.794621 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6514a0ff7e4da53996e39c6fc4d645b9d5a57fc1813df3511e63311380c127b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" Dec 16 02:09:47.795098 kubelet[3435]: E1216 02:09:47.794659 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6514a0ff7e4da53996e39c6fc4d645b9d5a57fc1813df3511e63311380c127b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" Dec 16 02:09:47.795598 kubelet[3435]: E1216 02:09:47.794747 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8478cfdcd8-h8dlm_calico-system(e4cfe125-283d-49d5-a19e-c93c8097201d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8478cfdcd8-h8dlm_calico-system(e4cfe125-283d-49d5-a19e-c93c8097201d)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"e6514a0ff7e4da53996e39c6fc4d645b9d5a57fc1813df3511e63311380c127b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:09:47.825121 containerd[1959]: time="2025-12-16T02:09:47.822829838Z" level=error msg="Failed to destroy network for sandbox \"183803abdb4233114c9b08f5a3cb4014b3d7eea4a8ac44c9a8c1eebbfe82ce02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.834447 containerd[1959]: time="2025-12-16T02:09:47.832220762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghk7w,Uid:f50b9bab-3859-4d1b-ba44-d918ecbff9d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"183803abdb4233114c9b08f5a3cb4014b3d7eea4a8ac44c9a8c1eebbfe82ce02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.835070 kubelet[3435]: E1216 02:09:47.834982 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183803abdb4233114c9b08f5a3cb4014b3d7eea4a8ac44c9a8c1eebbfe82ce02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.835416 kubelet[3435]: E1216 02:09:47.835290 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183803abdb4233114c9b08f5a3cb4014b3d7eea4a8ac44c9a8c1eebbfe82ce02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:47.835416 kubelet[3435]: E1216 02:09:47.835371 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183803abdb4233114c9b08f5a3cb4014b3d7eea4a8ac44c9a8c1eebbfe82ce02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ghk7w" Dec 16 02:09:47.837283 kubelet[3435]: E1216 02:09:47.835740 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"183803abdb4233114c9b08f5a3cb4014b3d7eea4a8ac44c9a8c1eebbfe82ce02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ghk7w" 
podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:09:47.902478 containerd[1959]: time="2025-12-16T02:09:47.902371958Z" level=error msg="Failed to destroy network for sandbox \"163b852996d730aeb3e592b41fcd7678a7748a7ce200c791622b186b03e84498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.909813 containerd[1959]: time="2025-12-16T02:09:47.909706454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7kpqw,Uid:874b2e4b-6331-48a2-85aa-be2fc619cfc8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"163b852996d730aeb3e592b41fcd7678a7748a7ce200c791622b186b03e84498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.911196 kubelet[3435]: E1216 02:09:47.910510 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163b852996d730aeb3e592b41fcd7678a7748a7ce200c791622b186b03e84498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.911196 kubelet[3435]: E1216 02:09:47.910597 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163b852996d730aeb3e592b41fcd7678a7748a7ce200c791622b186b03e84498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7kpqw" Dec 16 02:09:47.911196 kubelet[3435]: E1216 02:09:47.910637 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163b852996d730aeb3e592b41fcd7678a7748a7ce200c791622b186b03e84498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7kpqw" Dec 16 02:09:47.911527 kubelet[3435]: E1216 02:09:47.910731 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7kpqw_calico-system(874b2e4b-6331-48a2-85aa-be2fc619cfc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7kpqw_calico-system(874b2e4b-6331-48a2-85aa-be2fc619cfc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"163b852996d730aeb3e592b41fcd7678a7748a7ce200c791622b186b03e84498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:09:47.926458 containerd[1959]: time="2025-12-16T02:09:47.925988438Z" level=error msg="Failed to destroy network for sandbox \"229b8e91618d18936a3ecc52ecbb55d5d4d556cc66f13103cc9592627beef5d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.934693 containerd[1959]: time="2025-12-16T02:09:47.934591778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-7gb6n,Uid:495e068f-5f86-4d9c-b537-813d53666a90,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"229b8e91618d18936a3ecc52ecbb55d5d4d556cc66f13103cc9592627beef5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.935518 kubelet[3435]: E1216 02:09:47.934978 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229b8e91618d18936a3ecc52ecbb55d5d4d556cc66f13103cc9592627beef5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.935518 kubelet[3435]: E1216 02:09:47.935129 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229b8e91618d18936a3ecc52ecbb55d5d4d556cc66f13103cc9592627beef5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" Dec 16 02:09:47.935518 kubelet[3435]: E1216 02:09:47.935381 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229b8e91618d18936a3ecc52ecbb55d5d4d556cc66f13103cc9592627beef5d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" Dec 16 02:09:47.936956 kubelet[3435]: E1216 02:09:47.935903 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bbfbc6879-7gb6n_calico-apiserver(495e068f-5f86-4d9c-b537-813d53666a90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bbfbc6879-7gb6n_calico-apiserver(495e068f-5f86-4d9c-b537-813d53666a90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"229b8e91618d18936a3ecc52ecbb55d5d4d556cc66f13103cc9592627beef5d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:09:47.946671 containerd[1959]: time="2025-12-16T02:09:47.946245806Z" level=error msg="Failed to destroy network for sandbox \"87bf6df78b02dbe27b35962bbdf87015a552655ddf1afa2520f9b81bd42b2773\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.951927 containerd[1959]: time="2025-12-16T02:09:47.951299534Z" level=error msg="Failed to destroy network for sandbox 
\"2028432e52ff1644c44f9c0bc8b4212d59f3419fe54160eb19708f40ceb75d2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.952518 containerd[1959]: time="2025-12-16T02:09:47.952371614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cb7ff6886-4gfbr,Uid:d4c19668-1432-40a4-b193-31fc63aa2910,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bf6df78b02dbe27b35962bbdf87015a552655ddf1afa2520f9b81bd42b2773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.953220 kubelet[3435]: E1216 02:09:47.953136 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bf6df78b02dbe27b35962bbdf87015a552655ddf1afa2520f9b81bd42b2773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.953633 kubelet[3435]: E1216 02:09:47.953229 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bf6df78b02dbe27b35962bbdf87015a552655ddf1afa2520f9b81bd42b2773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cb7ff6886-4gfbr" Dec 16 02:09:47.953633 kubelet[3435]: E1216 02:09:47.953280 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87bf6df78b02dbe27b35962bbdf87015a552655ddf1afa2520f9b81bd42b2773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cb7ff6886-4gfbr" Dec 16 02:09:47.954957 kubelet[3435]: E1216 02:09:47.953851 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cb7ff6886-4gfbr_calico-system(d4c19668-1432-40a4-b193-31fc63aa2910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cb7ff6886-4gfbr_calico-system(d4c19668-1432-40a4-b193-31fc63aa2910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87bf6df78b02dbe27b35962bbdf87015a552655ddf1afa2520f9b81bd42b2773\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cb7ff6886-4gfbr" podUID="d4c19668-1432-40a4-b193-31fc63aa2910" Dec 16 02:09:47.956450 containerd[1959]: time="2025-12-16T02:09:47.955857050Z" level=error msg="Failed to destroy network for sandbox \"de941398126376c1804c78289abc1fe85121051d9d2ba010c1ebba03438fbb90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.959445 containerd[1959]: time="2025-12-16T02:09:47.959364050Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spv6q,Uid:6568dd52-5b39-4fa9-85f5-6b3edadbd038,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2028432e52ff1644c44f9c0bc8b4212d59f3419fe54160eb19708f40ceb75d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.960715 kubelet[3435]: E1216 02:09:47.960648 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2028432e52ff1644c44f9c0bc8b4212d59f3419fe54160eb19708f40ceb75d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.960922 kubelet[3435]: E1216 02:09:47.960744 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2028432e52ff1644c44f9c0bc8b4212d59f3419fe54160eb19708f40ceb75d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-spv6q" Dec 16 02:09:47.960922 kubelet[3435]: E1216 02:09:47.960784 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2028432e52ff1644c44f9c0bc8b4212d59f3419fe54160eb19708f40ceb75d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-spv6q" Dec 16 02:09:47.962129 kubelet[3435]: E1216 02:09:47.960886 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-spv6q_kube-system(6568dd52-5b39-4fa9-85f5-6b3edadbd038)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-spv6q_kube-system(6568dd52-5b39-4fa9-85f5-6b3edadbd038)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2028432e52ff1644c44f9c0bc8b4212d59f3419fe54160eb19708f40ceb75d2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-spv6q" podUID="6568dd52-5b39-4fa9-85f5-6b3edadbd038" Dec 16 02:09:47.964950 containerd[1959]: time="2025-12-16T02:09:47.964808078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-szgl8,Uid:9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de941398126376c1804c78289abc1fe85121051d9d2ba010c1ebba03438fbb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.966389 kubelet[3435]: E1216 02:09:47.965471 3435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de941398126376c1804c78289abc1fe85121051d9d2ba010c1ebba03438fbb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:47.966389 kubelet[3435]: E1216 02:09:47.965616 3435 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de941398126376c1804c78289abc1fe85121051d9d2ba010c1ebba03438fbb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" Dec 16 02:09:47.966389 kubelet[3435]: E1216 02:09:47.965691 3435 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de941398126376c1804c78289abc1fe85121051d9d2ba010c1ebba03438fbb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" Dec 16 02:09:47.966658 kubelet[3435]: E1216 02:09:47.965816 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bbfbc6879-szgl8_calico-apiserver(9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bbfbc6879-szgl8_calico-apiserver(9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de941398126376c1804c78289abc1fe85121051d9d2ba010c1ebba03438fbb90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:09:48.133136 containerd[1959]: time="2025-12-16T02:09:48.131399411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 02:09:48.285118 systemd[1]: run-netns-cni\x2d3113ced1\x2da396\x2ddaaa\x2d6a41\x2df867530137f2.mount: Deactivated successfully. Dec 16 02:09:48.285328 systemd[1]: run-netns-cni\x2d3fddb8fd\x2d1677\x2d2d1c\x2d64b4\x2d9ca768bf604f.mount: Deactivated successfully. Dec 16 02:09:48.285475 systemd[1]: run-netns-cni\x2dea429d51\x2dab6f\x2d8dbf\x2d335f\x2df0cbec730a6c.mount: Deactivated successfully. Dec 16 02:09:48.285662 systemd[1]: run-netns-cni\x2d16d92de1\x2d9d6b\x2d1b45\x2dfe6b\x2d00f45204fbae.mount: Deactivated successfully. Dec 16 02:09:55.360737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598562831.mount: Deactivated successfully. 
Dec 16 02:09:55.445797 containerd[1959]: time="2025-12-16T02:09:55.445125860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:55.450278 containerd[1959]: time="2025-12-16T02:09:55.450110348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 02:09:55.452669 containerd[1959]: time="2025-12-16T02:09:55.452597960Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:55.461805 containerd[1959]: time="2025-12-16T02:09:55.461683076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:55.465785 containerd[1959]: time="2025-12-16T02:09:55.465603212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.334120893s" Dec 16 02:09:55.465785 containerd[1959]: time="2025-12-16T02:09:55.465669320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 02:09:55.505082 containerd[1959]: time="2025-12-16T02:09:55.504897428Z" level=info msg="CreateContainer within sandbox \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 02:09:55.558095 containerd[1959]: time="2025-12-16T02:09:55.554500292Z" level=info msg="Container 5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:55.584507 containerd[1959]: time="2025-12-16T02:09:55.584435552Z" level=info msg="CreateContainer within sandbox \"3ada54df1a0024a7fcf41bc861794676d8a60b65a8bd53c791179507e719f74a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393\"" Dec 16 02:09:55.585954 containerd[1959]: time="2025-12-16T02:09:55.585899264Z" level=info msg="StartContainer for \"5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393\"" Dec 16 02:09:55.589429 containerd[1959]: time="2025-12-16T02:09:55.589370768Z" level=info msg="connecting to shim 5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393" address="unix:///run/containerd/s/39d3e6e276ad180d419f76f3055ca4b5b16846b803c33fb43c824ee4821f0161" protocol=ttrpc version=3 Dec 16 02:09:55.632835 systemd[1]: Started cri-containerd-5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393.scope - libcontainer container 5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393. 
Dec 16 02:09:55.749000 audit: BPF prog-id=175 op=LOAD Dec 16 02:09:55.752293 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 02:09:55.752492 kernel: audit: type=1334 audit(1765850995.749:579): prog-id=175 op=LOAD Dec 16 02:09:55.749000 audit[4455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.765770 kernel: audit: type=1300 audit(1765850995.749:579): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.766272 kernel: audit: type=1327 audit(1765850995.749:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.767998 kernel: audit: type=1334 audit(1765850995.749:580): prog-id=176 op=LOAD Dec 16 02:09:55.749000 audit: BPF prog-id=176 op=LOAD Dec 16 02:09:55.749000 audit[4455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.774644 kernel: audit: type=1300 audit(1765850995.749:580): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.749000 audit: BPF prog-id=176 op=UNLOAD Dec 16 02:09:55.787706 kernel: audit: type=1327 audit(1765850995.749:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.787843 kernel: audit: type=1334 audit(1765850995.749:581): prog-id=176 op=UNLOAD Dec 16 02:09:55.749000 audit[4455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.794087 kernel: audit: type=1300 
audit(1765850995.749:581): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.800674 kernel: audit: type=1327 audit(1765850995.749:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.801274 kernel: audit: type=1334 audit(1765850995.749:582): prog-id=175 op=UNLOAD Dec 16 02:09:55.749000 audit: BPF prog-id=175 op=UNLOAD Dec 16 02:09:55.749000 audit[4455]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.749000 audit: BPF prog-id=177 op=LOAD Dec 16 02:09:55.749000 audit[4455]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3953 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:55.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565363036356635616134336237666139343736666639393263393530 Dec 16 02:09:55.837892 containerd[1959]: time="2025-12-16T02:09:55.837827662Z" level=info msg="StartContainer for \"5e6065f5aa43b7fa9476ff992c9504ea0c29cde58e5ecb2c78341df73ee82393\" returns successfully" Dec 16 02:09:56.122546 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 02:09:56.122726 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 02:09:56.246809 kubelet[3435]: I1216 02:09:56.246649 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5tkx4" podStartSLOduration=1.853903625 podStartE2EDuration="19.246597356s" podCreationTimestamp="2025-12-16 02:09:37 +0000 UTC" firstStartedPulling="2025-12-16 02:09:38.075195721 +0000 UTC m=+35.561397333" lastFinishedPulling="2025-12-16 02:09:55.467889464 +0000 UTC m=+52.954091064" observedRunningTime="2025-12-16 02:09:56.242698508 +0000 UTC m=+53.728900132" watchObservedRunningTime="2025-12-16 02:09:56.246597356 +0000 UTC m=+53.732798980" Dec 16 02:09:56.573655 kubelet[3435]: I1216 02:09:56.573575 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjcz\" (UniqueName: \"kubernetes.io/projected/d4c19668-1432-40a4-b193-31fc63aa2910-kube-api-access-bwjcz\") pod \"d4c19668-1432-40a4-b193-31fc63aa2910\" (UID: \"d4c19668-1432-40a4-b193-31fc63aa2910\") " Dec 16 02:09:56.573838 kubelet[3435]: I1216 02:09:56.573680 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-ca-bundle\") pod \"d4c19668-1432-40a4-b193-31fc63aa2910\" (UID: \"d4c19668-1432-40a4-b193-31fc63aa2910\") " Dec 16 02:09:56.573838 kubelet[3435]: I1216 02:09:56.573731 3435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-backend-key-pair\") pod \"d4c19668-1432-40a4-b193-31fc63aa2910\" (UID: \"d4c19668-1432-40a4-b193-31fc63aa2910\") " Dec 16 02:09:56.577386 kubelet[3435]: I1216 02:09:56.576869 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d4c19668-1432-40a4-b193-31fc63aa2910" (UID: "d4c19668-1432-40a4-b193-31fc63aa2910"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 02:09:56.589968 systemd[1]: var-lib-kubelet-pods-d4c19668\x2d1432\x2d40a4\x2db193\x2d31fc63aa2910-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbwjcz.mount: Deactivated successfully. Dec 16 02:09:56.590415 systemd[1]: var-lib-kubelet-pods-d4c19668\x2d1432\x2d40a4\x2db193\x2d31fc63aa2910-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 02:09:56.592085 kubelet[3435]: I1216 02:09:56.591205 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c19668-1432-40a4-b193-31fc63aa2910-kube-api-access-bwjcz" (OuterVolumeSpecName: "kube-api-access-bwjcz") pod "d4c19668-1432-40a4-b193-31fc63aa2910" (UID: "d4c19668-1432-40a4-b193-31fc63aa2910"). InnerVolumeSpecName "kube-api-access-bwjcz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 02:09:56.592428 kubelet[3435]: I1216 02:09:56.592353 3435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d4c19668-1432-40a4-b193-31fc63aa2910" (UID: "d4c19668-1432-40a4-b193-31fc63aa2910"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 02:09:56.675739 kubelet[3435]: I1216 02:09:56.675646 3435 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-ca-bundle\") on node \"ip-172-31-29-223\" DevicePath \"\"" Dec 16 02:09:56.675739 kubelet[3435]: I1216 02:09:56.675738 3435 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4c19668-1432-40a4-b193-31fc63aa2910-whisker-backend-key-pair\") on node \"ip-172-31-29-223\" DevicePath \"\"" Dec 16 02:09:56.676031 kubelet[3435]: I1216 02:09:56.675798 3435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwjcz\" (UniqueName: \"kubernetes.io/projected/d4c19668-1432-40a4-b193-31fc63aa2910-kube-api-access-bwjcz\") on node \"ip-172-31-29-223\" DevicePath \"\"" Dec 16 02:09:56.859399 systemd[1]: Removed slice kubepods-besteffort-podd4c19668_1432_40a4_b193_31fc63aa2910.slice - libcontainer container kubepods-besteffort-podd4c19668_1432_40a4_b193_31fc63aa2910.slice. Dec 16 02:09:57.368454 systemd[1]: Created slice kubepods-besteffort-pode33f6f2f_50da_4153_83d7_2ef6a7213c28.slice - libcontainer container kubepods-besteffort-pode33f6f2f_50da_4153_83d7_2ef6a7213c28.slice. Dec 16 02:09:57.481922 kubelet[3435]: I1216 02:09:57.481846 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e33f6f2f-50da-4153-83d7-2ef6a7213c28-whisker-backend-key-pair\") pod \"whisker-78d8765688-2r4mt\" (UID: \"e33f6f2f-50da-4153-83d7-2ef6a7213c28\") " pod="calico-system/whisker-78d8765688-2r4mt" Dec 16 02:09:57.482733 kubelet[3435]: I1216 02:09:57.481933 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e33f6f2f-50da-4153-83d7-2ef6a7213c28-whisker-ca-bundle\") pod \"whisker-78d8765688-2r4mt\" (UID: \"e33f6f2f-50da-4153-83d7-2ef6a7213c28\") " pod="calico-system/whisker-78d8765688-2r4mt" Dec 16 02:09:57.482733 kubelet[3435]: I1216 02:09:57.481992 3435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tb62\" (UniqueName: \"kubernetes.io/projected/e33f6f2f-50da-4153-83d7-2ef6a7213c28-kube-api-access-9tb62\") pod \"whisker-78d8765688-2r4mt\" (UID: \"e33f6f2f-50da-4153-83d7-2ef6a7213c28\") " pod="calico-system/whisker-78d8765688-2r4mt" Dec 16 02:09:57.677587 containerd[1959]: time="2025-12-16T02:09:57.677380259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78d8765688-2r4mt,Uid:e33f6f2f-50da-4153-83d7-2ef6a7213c28,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:57.824777 containerd[1959]: time="2025-12-16T02:09:57.824715023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zk92z,Uid:e5f8c11a-871b-4f4b-9b56-288874230e6c,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:58.827513 containerd[1959]: time="2025-12-16T02:09:58.826443372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7kpqw,Uid:874b2e4b-6331-48a2-85aa-be2fc619cfc8,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:58.828310 containerd[1959]: time="2025-12-16T02:09:58.827797380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghk7w,Uid:f50b9bab-3859-4d1b-ba44-d918ecbff9d1,Namespace:calico-system,Attempt:0,}" Dec 16 
02:09:58.845633 kubelet[3435]: I1216 02:09:58.845497 3435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c19668-1432-40a4-b193-31fc63aa2910" path="/var/lib/kubelet/pods/d4c19668-1432-40a4-b193-31fc63aa2910/volumes" Dec 16 02:09:59.469221 systemd-networkd[1872]: cali5db082ead3e: Link UP Dec 16 02:09:59.490075 systemd-networkd[1872]: cali5db082ead3e: Gained carrier Dec 16 02:09:59.498877 (udev-worker)[4739]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:09:59.578928 (udev-worker)[4738]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:09:59.596850 systemd-networkd[1872]: cali603e7a43b8a: Link UP Dec 16 02:09:59.607142 systemd-networkd[1872]: cali603e7a43b8a: Gained carrier Dec 16 02:09:59.639311 containerd[1959]: 2025-12-16 02:09:57.916 [INFO][4569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:09:59.639311 containerd[1959]: 2025-12-16 02:09:58.922 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0 whisker-78d8765688- calico-system e33f6f2f-50da-4153-83d7-2ef6a7213c28 964 0 2025-12-16 02:09:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78d8765688 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-223 whisker-78d8765688-2r4mt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5db082ead3e [] [] }} ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-" Dec 16 02:09:59.639311 containerd[1959]: 2025-12-16 02:09:58.923 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.639311 containerd[1959]: 2025-12-16 02:09:59.278 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" HandleID="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Workload="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.280 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" HandleID="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Workload="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033d510), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-223", "pod":"whisker-78d8765688-2r4mt", "timestamp":"2025-12-16 02:09:59.278213675 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.282 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.283 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.284 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.315 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" host="ip-172-31-29-223" Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.327 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.339 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.344 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:09:59.639704 containerd[1959]: 2025-12-16 02:09:59.352 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.352 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" host="ip-172-31-29-223" Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.357 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6 Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.374 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" host="ip-172-31-29-223" Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.390 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.129/26] block=192.168.54.128/26 handle="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" host="ip-172-31-29-223" Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.390 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.129/26] handle="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" host="ip-172-31-29-223" Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.390 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:09:59.640289 containerd[1959]: 2025-12-16 02:09:59.391 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.129/26] IPv6=[] ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" HandleID="k8s-pod-network.bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Workload="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.640640 containerd[1959]: 2025-12-16 02:09:59.401 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0", GenerateName:"whisker-78d8765688-", Namespace:"calico-system", SelfLink:"", UID:"e33f6f2f-50da-4153-83d7-2ef6a7213c28", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78d8765688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"whisker-78d8765688-2r4mt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5db082ead3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:09:59.640640 containerd[1959]: 2025-12-16 02:09:59.401 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.129/32] ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.640871 containerd[1959]: 2025-12-16 02:09:59.401 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5db082ead3e ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.640871 containerd[1959]: 2025-12-16 02:09:59.502 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.640976 containerd[1959]: 2025-12-16 02:09:59.505 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" 
WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0", GenerateName:"whisker-78d8765688-", Namespace:"calico-system", SelfLink:"", UID:"e33f6f2f-50da-4153-83d7-2ef6a7213c28", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78d8765688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6", Pod:"whisker-78d8765688-2r4mt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5db082ead3e", MAC:"a2:b8:e2:83:5b:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:09:59.644127 containerd[1959]: 2025-12-16 02:09:59.619 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" Namespace="calico-system" Pod="whisker-78d8765688-2r4mt" WorkloadEndpoint="ip--172--31--29--223-k8s-whisker--78d8765688--2r4mt-eth0" Dec 16 02:09:59.674599 containerd[1959]: 2025-12-16 02:09:57.916 [INFO][4574] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:09:59.674599 containerd[1959]: 2025-12-16 02:09:58.924 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0 coredns-674b8bbfcf- kube-system e5f8c11a-871b-4f4b-9b56-288874230e6c 885 0 2025-12-16 02:09:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-223 coredns-674b8bbfcf-zk92z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali603e7a43b8a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-" Dec 16 02:09:59.674599 containerd[1959]: 2025-12-16 02:09:58.924 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.674599 containerd[1959]: 2025-12-16 02:09:59.274 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" 
HandleID="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Workload="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.282 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" HandleID="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Workload="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000382440), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-223", "pod":"coredns-674b8bbfcf-zk92z", "timestamp":"2025-12-16 02:09:59.274153847 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.282 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.390 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.391 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.414 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" host="ip-172-31-29-223" Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.429 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.448 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.458 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:09:59.674986 containerd[1959]: 2025-12-16 02:09:59.476 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.476 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" host="ip-172-31-29-223" Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.484 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.517 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" host="ip-172-31-29-223" Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.551 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.130/26] block=192.168.54.128/26 handle="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" host="ip-172-31-29-223" Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.552 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.130/26] 
handle="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" host="ip-172-31-29-223" Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.552 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:09:59.680022 containerd[1959]: 2025-12-16 02:09:59.553 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.130/26] IPv6=[] ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" HandleID="k8s-pod-network.2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Workload="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.684296 containerd[1959]: 2025-12-16 02:09:59.568 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e5f8c11a-871b-4f4b-9b56-288874230e6c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"coredns-674b8bbfcf-zk92z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali603e7a43b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:09:59.684296 containerd[1959]: 2025-12-16 02:09:59.569 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.130/32] ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.684296 containerd[1959]: 2025-12-16 02:09:59.572 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali603e7a43b8a ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.684296 containerd[1959]: 2025-12-16 
02:09:59.621 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.684296 containerd[1959]: 2025-12-16 02:09:59.626 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e5f8c11a-871b-4f4b-9b56-288874230e6c", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a", Pod:"coredns-674b8bbfcf-zk92z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali603e7a43b8a", MAC:"8a:cb:23:40:06:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:09:59.684296 containerd[1959]: 2025-12-16 02:09:59.661 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zk92z" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--zk92z-eth0" Dec 16 02:09:59.811079 containerd[1959]: time="2025-12-16T02:09:59.810901777Z" level=info msg="connecting to shim bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6" address="unix:///run/containerd/s/b6782339fe3988d5a50d9d66ae6670c701b26de42f1edef1e4f6241946db2255" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:59.825246 containerd[1959]: time="2025-12-16T02:09:59.825157873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-7gb6n,Uid:495e068f-5f86-4d9c-b537-813d53666a90,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:09:59.935257 systemd-networkd[1872]: cali79e086e721b: Link UP Dec 16 02:09:59.946937 
systemd-networkd[1872]: cali79e086e721b: Gained carrier Dec 16 02:09:59.970484 containerd[1959]: time="2025-12-16T02:09:59.970418246Z" level=info msg="connecting to shim 2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a" address="unix:///run/containerd/s/97715a5622e9a853ddb39163d1155bee6bdb6137bac12278254530ce687890b5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:58.973 [INFO][4681] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.040 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0 csi-node-driver- calico-system f50b9bab-3859-4d1b-ba44-d918ecbff9d1 785 0 2025-12-16 02:09:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-223 csi-node-driver-ghk7w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali79e086e721b [] [] }} ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.047 [INFO][4681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.305 [INFO][4715] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" HandleID="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Workload="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.306 [INFO][4715] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" HandleID="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Workload="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000354250), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-223", "pod":"csi-node-driver-ghk7w", "timestamp":"2025-12-16 02:09:59.305788403 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.306 [INFO][4715] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.552 [INFO][4715] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.554 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.644 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.685 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.721 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.739 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.755 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.756 [INFO][4715] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.767 [INFO][4715] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4 Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.797 [INFO][4715] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.828 [INFO][4715] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.131/26] block=192.168.54.128/26 handle="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.829 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.131/26] handle="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" host="ip-172-31-29-223" Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.830 [INFO][4715] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:00.034316 containerd[1959]: 2025-12-16 02:09:59.830 [INFO][4715] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.131/26] IPv6=[] ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" HandleID="k8s-pod-network.e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Workload="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.038436 containerd[1959]: 2025-12-16 02:09:59.878 [INFO][4681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f50b9bab-3859-4d1b-ba44-d918ecbff9d1", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"csi-node-driver-ghk7w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79e086e721b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:00.038436 containerd[1959]: 2025-12-16 02:09:59.878 [INFO][4681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.131/32] ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.038436 containerd[1959]: 2025-12-16 02:09:59.879 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79e086e721b ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.038436 containerd[1959]: 2025-12-16 02:09:59.961 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.038436 containerd[1959]: 2025-12-16 02:09:59.965 [INFO][4681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" 
Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f50b9bab-3859-4d1b-ba44-d918ecbff9d1", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4", Pod:"csi-node-driver-ghk7w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79e086e721b", MAC:"fa:57:3a:bf:48:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:00.038436 containerd[1959]: 2025-12-16 02:10:00.008 [INFO][4681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" Namespace="calico-system" Pod="csi-node-driver-ghk7w" WorkloadEndpoint="ip--172--31--29--223-k8s-csi--node--driver--ghk7w-eth0" Dec 16 02:10:00.145979 systemd[1]: Started cri-containerd-bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6.scope - libcontainer container bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6. Dec 16 02:10:00.278093 containerd[1959]: time="2025-12-16T02:10:00.276915888Z" level=info msg="connecting to shim e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4" address="unix:///run/containerd/s/4b28d19329e93773ee8e1c5014bfe0370fabbae837fb23bd0f566eb7c299764e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:00.298844 systemd-networkd[1872]: cali95aecd30ada: Link UP Dec 16 02:10:00.302035 systemd-networkd[1872]: cali95aecd30ada: Gained carrier Dec 16 02:10:00.326344 systemd[1]: Started cri-containerd-2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a.scope - libcontainer container 2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a. 
Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:58.960 [INFO][4679] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.035 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0 goldmane-666569f655- calico-system 874b2e4b-6331-48a2-85aa-be2fc619cfc8 891 0 2025-12-16 02:09:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-223 goldmane-666569f655-7kpqw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali95aecd30ada [] [] }} ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.036 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.307 [INFO][4717] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" HandleID="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Workload="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.308 [INFO][4717] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" HandleID="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Workload="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000369d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-223", "pod":"goldmane-666569f655-7kpqw", "timestamp":"2025-12-16 02:09:59.306725831 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.309 [INFO][4717] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.830 [INFO][4717] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.831 [INFO][4717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.933 [INFO][4717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:09:59.978 [INFO][4717] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.018 [INFO][4717] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.040 [INFO][4717] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.061 [INFO][4717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.064 [INFO][4717] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.116 [INFO][4717] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.156 [INFO][4717] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.198 [INFO][4717] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.132/26] block=192.168.54.128/26 handle="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.199 [INFO][4717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.132/26] handle="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" host="ip-172-31-29-223" Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.200 [INFO][4717] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:00.420765 containerd[1959]: 2025-12-16 02:10:00.203 [INFO][4717] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.132/26] IPv6=[] ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" HandleID="k8s-pod-network.16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Workload="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.426446 containerd[1959]: 2025-12-16 02:10:00.258 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"874b2e4b-6331-48a2-85aa-be2fc619cfc8", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"goldmane-666569f655-7kpqw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95aecd30ada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:00.426446 containerd[1959]: 2025-12-16 02:10:00.260 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.132/32] ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.426446 containerd[1959]: 2025-12-16 02:10:00.260 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95aecd30ada ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.426446 containerd[1959]: 2025-12-16 02:10:00.311 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.426446 containerd[1959]: 2025-12-16 02:10:00.322 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" 
WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"874b2e4b-6331-48a2-85aa-be2fc619cfc8", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f", Pod:"goldmane-666569f655-7kpqw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95aecd30ada", MAC:"e6:d8:44:19:b0:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:00.426446 containerd[1959]: 2025-12-16 02:10:00.397 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" Namespace="calico-system" Pod="goldmane-666569f655-7kpqw" WorkloadEndpoint="ip--172--31--29--223-k8s-goldmane--666569f655--7kpqw-eth0" Dec 16 02:10:00.457000 audit: BPF prog-id=178 op=LOAD Dec 16 02:10:00.537000 audit: BPF prog-id=179 op=LOAD Dec 16 02:10:00.537000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.537000 audit: BPF prog-id=179 op=UNLOAD Dec 16 02:10:00.537000 audit[4777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.540000 audit: BPF prog-id=180 op=LOAD Dec 16 02:10:00.540000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.540000 audit: BPF prog-id=181 op=LOAD Dec 16 02:10:00.540000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.541000 audit: BPF prog-id=181 op=UNLOAD Dec 16 02:10:00.541000 audit[4777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.541000 audit: BPF prog-id=180 op=UNLOAD Dec 16 02:10:00.541000 audit[4777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.541000 audit: BPF prog-id=182 op=LOAD Dec 16 02:10:00.541000 audit[4777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4763 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646564373936616131373462393935306365376133393438343532 Dec 16 02:10:00.544713 systemd[1]: Started cri-containerd-e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4.scope - libcontainer container e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4. 
Dec 16 02:10:00.562816 containerd[1959]: time="2025-12-16T02:10:00.562741897Z" level=info msg="connecting to shim 16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f" address="unix:///run/containerd/s/8df29b91badbd3a2baad06c90f4027852ab7677a91ff91b9eb463d6b934a7bdd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:00.606000 audit: BPF prog-id=183 op=LOAD Dec 16 02:10:00.610000 audit: BPF prog-id=184 op=LOAD Dec 16 02:10:00.610000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000096180 a2=98 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.611000 audit: BPF prog-id=184 op=UNLOAD Dec 16 02:10:00.611000 audit[4820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.613000 audit: BPF prog-id=185 op=LOAD Dec 16 02:10:00.613000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000963e8 a2=98 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.614000 audit: BPF prog-id=186 op=LOAD Dec 16 02:10:00.614000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000096168 a2=98 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.617000 audit: BPF prog-id=186 op=UNLOAD Dec 16 02:10:00.617000 audit[4820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.617000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.617000 audit: BPF prog-id=185 op=UNLOAD Dec 16 02:10:00.617000 audit[4820]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.617000 audit: BPF prog-id=187 op=LOAD Dec 16 02:10:00.617000 audit[4820]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000096648 a2=98 a3=0 items=0 ppid=4793 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663136623762353362656664656633376139653633626436313264 Dec 16 02:10:00.670493 systemd[1]: Started cri-containerd-16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f.scope - libcontainer container 16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f. 
Dec 16 02:10:00.820817 systemd-networkd[1872]: calia944d4a223a: Link UP Dec 16 02:10:00.830661 containerd[1959]: time="2025-12-16T02:10:00.829363142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-szgl8,Uid:9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:00.830661 containerd[1959]: time="2025-12-16T02:10:00.829496402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spv6q,Uid:6568dd52-5b39-4fa9-85f5-6b3edadbd038,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:00.830392 systemd-networkd[1872]: calia944d4a223a: Gained carrier Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.329 [INFO][4778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.435 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0 calico-apiserver-7bbfbc6879- calico-apiserver 495e068f-5f86-4d9c-b537-813d53666a90 890 0 2025-12-16 02:09:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bbfbc6879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-223 calico-apiserver-7bbfbc6879-7gb6n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia944d4a223a [] [] }} ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.435 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.589 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" HandleID="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Workload="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.592 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" HandleID="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Workload="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-223", "pod":"calico-apiserver-7bbfbc6879-7gb6n", "timestamp":"2025-12-16 02:10:00.589774153 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.593 [INFO][4884] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.593 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.593 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.656 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.697 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.711 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.722 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.731 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.731 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.736 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.752 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.773 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.133/26] block=192.168.54.128/26 handle="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.773 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.133/26] handle="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" host="ip-172-31-29-223" Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.773 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:00.934196 containerd[1959]: 2025-12-16 02:10:00.773 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.133/26] IPv6=[] ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" HandleID="k8s-pod-network.3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Workload="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.938393 containerd[1959]: 2025-12-16 02:10:00.797 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0", GenerateName:"calico-apiserver-7bbfbc6879-", Namespace:"calico-apiserver", SelfLink:"", UID:"495e068f-5f86-4d9c-b537-813d53666a90", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbfbc6879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"calico-apiserver-7bbfbc6879-7gb6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia944d4a223a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:00.938393 containerd[1959]: 2025-12-16 02:10:00.799 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.133/32] ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.938393 containerd[1959]: 2025-12-16 02:10:00.802 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia944d4a223a ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.938393 containerd[1959]: 2025-12-16 02:10:00.847 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.938393 containerd[1959]: 2025-12-16 02:10:00.848 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0", GenerateName:"calico-apiserver-7bbfbc6879-", Namespace:"calico-apiserver", SelfLink:"", UID:"495e068f-5f86-4d9c-b537-813d53666a90", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbfbc6879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb", Pod:"calico-apiserver-7bbfbc6879-7gb6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia944d4a223a", MAC:"3e:36:e4:af:cc:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:00.938393 containerd[1959]: 2025-12-16 02:10:00.927 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-7gb6n" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--7gb6n-eth0" Dec 16 02:10:00.952075 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 16 02:10:00.952229 kernel: audit: type=1334 audit(1765851000.947:600): prog-id=188 op=LOAD Dec 16 02:10:00.947000 audit: BPF prog-id=188 op=LOAD Dec 16 02:10:00.955000 audit: BPF prog-id=189 op=LOAD Dec 16 02:10:00.966782 kernel: audit: type=1334 audit(1765851000.955:601): prog-id=189 op=LOAD Dec 16 02:10:00.966915 kernel: audit: type=1300 audit(1765851000.955:601): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.955000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.955000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:00.955000 audit: BPF prog-id=189 op=UNLOAD Dec 16 02:10:00.983287 kernel: audit: type=1327 audit(1765851000.955:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:00.983425 kernel: audit: type=1334 audit(1765851000.955:602): prog-id=189 op=UNLOAD Dec 16 02:10:00.955000 audit[4918]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.991103 kernel: audit: type=1300 audit(1765851000.955:602): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:00.998941 kernel: audit: type=1327 audit(1765851000.955:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:00.959000 audit: BPF prog-id=190 op=LOAD Dec 16 02:10:01.002011 kernel: audit: type=1334 audit(1765851000.959:603): prog-id=190 op=LOAD Dec 16 02:10:00.959000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.008578 kernel: audit: type=1300 audit(1765851000.959:603): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:01.019305 kernel: audit: type=1327 audit(1765851000.959:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 
02:10:00.990000 audit: BPF prog-id=191 op=LOAD Dec 16 02:10:00.990000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:00.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:01.024000 audit: BPF prog-id=191 op=UNLOAD Dec 16 02:10:01.024000 audit[4918]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:01.025000 audit: BPF prog-id=190 op=UNLOAD Dec 16 02:10:01.025000 audit[4918]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:01.025000 audit: BPF prog-id=192 op=LOAD Dec 16 02:10:01.025000 audit[4918]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4905 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333336343330626433343064653832356635383363313139333630 Dec 16 02:10:01.055770 systemd-networkd[1872]: cali5db082ead3e: Gained IPv6LL Dec 16 02:10:01.118082 containerd[1959]: time="2025-12-16T02:10:01.104038812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zk92z,Uid:e5f8c11a-871b-4f4b-9b56-288874230e6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a\"" Dec 16 02:10:01.151885 containerd[1959]: time="2025-12-16T02:10:01.151728252Z" level=info msg="CreateContainer within sandbox \"2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:10:01.193000 audit: BPF prog-id=193 op=LOAD Dec 16 02:10:01.197914 containerd[1959]: time="2025-12-16T02:10:01.197818764Z" level=info msg="connecting to shim 
3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb" address="unix:///run/containerd/s/f16005c03ebc29bdd51cd3ee0b22a4564a96372a051854655d5e349a082bc012" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:01.200000 audit: BPF prog-id=194 op=LOAD Dec 16 02:10:01.200000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.200000 audit: BPF prog-id=194 op=UNLOAD Dec 16 02:10:01.200000 audit[4869]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.205000 audit: BPF prog-id=195 op=LOAD Dec 16 02:10:01.205000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.205000 audit: BPF prog-id=196 op=LOAD Dec 16 02:10:01.205000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.205000 audit: BPF prog-id=196 op=UNLOAD Dec 16 02:10:01.205000 audit[4869]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.205000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.219000 audit: BPF prog-id=195 op=UNLOAD Dec 16 02:10:01.219000 audit[4869]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.219000 audit: BPF prog-id=197 op=LOAD Dec 16 02:10:01.219000 audit[4869]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4844 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533326234303866353062386463343138616165616261383761353864 Dec 16 02:10:01.324744 containerd[1959]: time="2025-12-16T02:10:01.324648805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78d8765688-2r4mt,Uid:e33f6f2f-50da-4153-83d7-2ef6a7213c28,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcded796aa174b9950ce7a3948452babaddc1cf1a66ccacd8a9b9bb12626acc6\"" Dec 16 02:10:01.329679 containerd[1959]: time="2025-12-16T02:10:01.329594401Z" level=info msg="Container 78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:01.335378 containerd[1959]: time="2025-12-16T02:10:01.335302273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:01.361000 audit: BPF prog-id=198 op=LOAD Dec 16 02:10:01.361000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3e7a178 a2=98 a3=ffffc3e7a168 items=0 ppid=4606 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.361000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:01.363000 audit: BPF prog-id=198 op=UNLOAD Dec 16 02:10:01.363000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc3e7a148 a3=0 items=0 ppid=4606 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.363000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:01.363000 audit: BPF prog-id=199 op=LOAD Dec 16 02:10:01.363000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3e7a028 a2=74 a3=95 items=0 ppid=4606 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.363000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:01.365000 audit: BPF prog-id=199 op=UNLOAD Dec 16 02:10:01.365000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4606 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.365000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:01.366000 audit: BPF prog-id=200 op=LOAD Dec 16 02:10:01.366000 audit[5021]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3e7a058 a2=40 a3=ffffc3e7a088 items=0 ppid=4606 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.366000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:01.367000 audit: BPF prog-id=200 op=UNLOAD Dec 16 02:10:01.367000 audit[5021]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc3e7a088 items=0 ppid=4606 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.367000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:01.395000 audit: BPF prog-id=201 op=LOAD Dec 16 02:10:01.395000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee838878 a2=98 a3=ffffee838868 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.395000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:01.398000 audit: BPF 
prog-id=201 op=UNLOAD Dec 16 02:10:01.398000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffee838848 a3=0 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.398000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:01.400000 audit: BPF prog-id=202 op=LOAD Dec 16 02:10:01.400000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee838508 a2=74 a3=95 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:01.411079 containerd[1959]: time="2025-12-16T02:10:01.409982833Z" level=info msg="CreateContainer within sandbox \"2bf16b7b53befdef37a9e63bd612df67ff3cf314059b2da88beb08e3941b1e2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8\"" Dec 16 02:10:01.408000 audit: BPF prog-id=202 op=UNLOAD Dec 16 02:10:01.408000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.408000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:01.411000 audit: BPF prog-id=203 op=LOAD Dec 16 02:10:01.411000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee838568 a2=94 a3=2 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.411000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:01.413000 audit: BPF prog-id=203 op=UNLOAD Dec 16 02:10:01.413000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:01.420602 containerd[1959]: time="2025-12-16T02:10:01.419810041Z" level=info msg="StartContainer for \"78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8\"" Dec 16 02:10:01.443911 containerd[1959]: time="2025-12-16T02:10:01.443802013Z" level=info msg="connecting to shim 78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8" address="unix:///run/containerd/s/97715a5622e9a853ddb39163d1155bee6bdb6137bac12278254530ce687890b5" protocol=ttrpc version=3 Dec 16 02:10:01.464933 systemd[1]: Started cri-containerd-3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb.scope - libcontainer container 3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb. 
Dec 16 02:10:01.504287 systemd-networkd[1872]: cali603e7a43b8a: Gained IPv6LL Dec 16 02:10:01.513284 containerd[1959]: time="2025-12-16T02:10:01.512812466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ghk7w,Uid:f50b9bab-3859-4d1b-ba44-d918ecbff9d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e32b408f50b8dc418aaeaba87a58dd7d1bb79d4443ea91c26e42cef451719bd4\"" Dec 16 02:10:01.591317 systemd[1]: Started cri-containerd-78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8.scope - libcontainer container 78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8. Dec 16 02:10:01.656262 containerd[1959]: time="2025-12-16T02:10:01.655964198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:01.663905 containerd[1959]: time="2025-12-16T02:10:01.663224102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:01.663905 containerd[1959]: time="2025-12-16T02:10:01.663371282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:01.666886 kubelet[3435]: E1216 02:10:01.663794 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:01.666886 kubelet[3435]: E1216 02:10:01.663924 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:01.669965 kubelet[3435]: E1216 02:10:01.669805 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:117178efe1694065bf2a2114c9173c9e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:01.674173 containerd[1959]: time="2025-12-16T02:10:01.668689359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:01.719000 audit: BPF prog-id=204 op=LOAD Dec 16 02:10:01.722000 audit: BPF prog-id=205 op=LOAD Dec 16 02:10:01.722000 audit[5044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.723000 audit: BPF prog-id=205 op=UNLOAD Dec 16 02:10:01.723000 audit[5044]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.723000 audit: BPF prog-id=206 op=LOAD Dec 16 02:10:01.723000 audit[5044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.723000 audit: BPF prog-id=207 op=LOAD Dec 16 02:10:01.723000 audit[5044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.724000 audit: BPF prog-id=207 op=UNLOAD Dec 16 02:10:01.724000 audit[5044]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.725000 audit: BPF prog-id=206 op=UNLOAD Dec 16 02:10:01.725000 audit[5044]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.725000 audit: BPF prog-id=208 op=LOAD Dec 16 02:10:01.725000 audit[5044]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4793 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:01.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633630333431663866613230336135633865373464633965346662 Dec 16 02:10:01.822346 systemd-networkd[1872]: cali95aecd30ada: Gained IPv6LL Dec 16 02:10:01.830266 containerd[1959]: time="2025-12-16T02:10:01.830152023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8478cfdcd8-h8dlm,Uid:e4cfe125-283d-49d5-a19e-c93c8097201d,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:01.835319 
containerd[1959]: time="2025-12-16T02:10:01.835226931Z" level=info msg="StartContainer for \"78c60341f8fa203a5c8e74dc9e4fbf7c74b6b43084e859a4690ed19467b548a8\" returns successfully" Dec 16 02:10:01.872265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740692813.mount: Deactivated successfully. Dec 16 02:10:01.938706 containerd[1959]: time="2025-12-16T02:10:01.938608168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:01.942372 containerd[1959]: time="2025-12-16T02:10:01.942208240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:01.942543 containerd[1959]: time="2025-12-16T02:10:01.942271048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:01.943240 kubelet[3435]: E1216 02:10:01.942714 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:01.943240 kubelet[3435]: E1216 02:10:01.942777 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:01.944940 containerd[1959]: time="2025-12-16T02:10:01.944402116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:01.945009 kubelet[3435]: E1216 02:10:01.943100 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:01.952537 systemd-networkd[1872]: cali79e086e721b: Gained IPv6LL Dec 16 02:10:02.053645 systemd-networkd[1872]: calia0f7ad041ff: Link UP Dec 16 02:10:02.091253 systemd-networkd[1872]: calia0f7ad041ff: Gained carrier Dec 16 02:10:02.126743 containerd[1959]: time="2025-12-16T02:10:02.124721113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7kpqw,Uid:874b2e4b-6331-48a2-85aa-be2fc619cfc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"16336430bd340de825f583c119360b6aaff38e32ca849a340cea72766b5dfc3f\"" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.460 [INFO][4965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0 calico-apiserver-7bbfbc6879- calico-apiserver 9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69 893 0 2025-12-16 02:09:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bbfbc6879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-223 calico-apiserver-7bbfbc6879-szgl8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia0f7ad041ff [] [] }} ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" 
WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.462 [INFO][4965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.817 [INFO][5053] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" HandleID="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Workload="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.819 [INFO][5053] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" HandleID="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Workload="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102380), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-223", "pod":"calico-apiserver-7bbfbc6879-szgl8", "timestamp":"2025-12-16 02:10:01.817232931 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.819 [INFO][5053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.819 [INFO][5053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.820 [INFO][5053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.877 [INFO][5053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.895 [INFO][5053] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.937 [INFO][5053] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.960 [INFO][5053] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.976 [INFO][5053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.977 [INFO][5053] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:01.985 [INFO][5053] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:02.002 [INFO][5053] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:02.030 [INFO][5053] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.134/26] block=192.168.54.128/26 handle="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:02.030 [INFO][5053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.134/26] handle="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" host="ip-172-31-29-223" Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:02.031 [INFO][5053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:02.169470 containerd[1959]: 2025-12-16 02:10:02.031 [INFO][5053] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.134/26] IPv6=[] ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" HandleID="k8s-pod-network.13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Workload="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.173430 containerd[1959]: 2025-12-16 02:10:02.036 [INFO][4965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0", GenerateName:"calico-apiserver-7bbfbc6879-", Namespace:"calico-apiserver", SelfLink:"", UID:"9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbfbc6879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"calico-apiserver-7bbfbc6879-szgl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0f7ad041ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.173430 containerd[1959]: 2025-12-16 02:10:02.037 [INFO][4965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.134/32] ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.173430 containerd[1959]: 2025-12-16 02:10:02.037 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0f7ad041ff ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.173430 containerd[1959]: 2025-12-16 02:10:02.101 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.173430 containerd[1959]: 2025-12-16 02:10:02.108 [INFO][4965] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0", GenerateName:"calico-apiserver-7bbfbc6879-", Namespace:"calico-apiserver", SelfLink:"", UID:"9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbfbc6879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f", Pod:"calico-apiserver-7bbfbc6879-szgl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0f7ad041ff", MAC:"26:20:f8:bb:ce:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.173430 containerd[1959]: 2025-12-16 02:10:02.162 [INFO][4965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" Namespace="calico-apiserver" Pod="calico-apiserver-7bbfbc6879-szgl8" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--apiserver--7bbfbc6879--szgl8-eth0" Dec 16 02:10:02.265497 containerd[1959]: time="2025-12-16T02:10:02.261911941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:02.266194 containerd[1959]: time="2025-12-16T02:10:02.265621993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:02.267086 kubelet[3435]: E1216 02:10:02.266991 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:02.267305 containerd[1959]: time="2025-12-16T02:10:02.267167557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:02.268646 kubelet[3435]: E1216 02:10:02.267977 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:02.275986 kubelet[3435]: E1216 02:10:02.270026 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:02.275986 kubelet[3435]: E1216 02:10:02.273998 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:02.276438 containerd[1959]: time="2025-12-16T02:10:02.273694538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:10:02.306792 kubelet[3435]: E1216 02:10:02.306302 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:02.338457 containerd[1959]: time="2025-12-16T02:10:02.338356670Z" level=info msg="connecting to shim 13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f" address="unix:///run/containerd/s/226d0957629c4c7882ebae3181146159dc125b3fae654113a34045408e1b0475" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:02.400224 systemd-networkd[1872]: cali1ad2bdef847: Link UP Dec 16 02:10:02.404533 systemd-networkd[1872]: cali1ad2bdef847: Gained carrier Dec 16 02:10:02.451889 kubelet[3435]: I1216 02:10:02.451797 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zk92z" podStartSLOduration=55.451771274 podStartE2EDuration="55.451771274s" podCreationTimestamp="2025-12-16 02:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:02.387289034 +0000 UTC m=+59.873490682" watchObservedRunningTime="2025-12-16 02:10:02.451771274 +0000 UTC m=+59.937972886" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:01.525 [INFO][4960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0 coredns-674b8bbfcf- kube-system 6568dd52-5b39-4fa9-85f5-6b3edadbd038 889 0 2025-12-16 02:09:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-223 coredns-674b8bbfcf-spv6q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ad2bdef847 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:01.525 [INFO][4960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:01.846 [INFO][5065] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" HandleID="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Workload="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:01.846 [INFO][5065] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" HandleID="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Workload="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003366f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-223", "pod":"coredns-674b8bbfcf-spv6q", "timestamp":"2025-12-16 02:10:01.846513591 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:01.846 [INFO][5065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.032 [INFO][5065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.032 [INFO][5065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.109 [INFO][5065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.140 [INFO][5065] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.170 [INFO][5065] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.182 [INFO][5065] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.197 [INFO][5065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.198 [INFO][5065] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.242 [INFO][5065] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.289 [INFO][5065] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.336 [INFO][5065] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.135/26] block=192.168.54.128/26 handle="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.336 [INFO][5065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.135/26] handle="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" host="ip-172-31-29-223" Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.336 [INFO][5065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:02.475415 containerd[1959]: 2025-12-16 02:10:02.336 [INFO][5065] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.135/26] IPv6=[] ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" HandleID="k8s-pod-network.3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Workload="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.476645 containerd[1959]: 2025-12-16 02:10:02.371 [INFO][4960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6568dd52-5b39-4fa9-85f5-6b3edadbd038", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"coredns-674b8bbfcf-spv6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ad2bdef847", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.476645 containerd[1959]: 2025-12-16 02:10:02.373 [INFO][4960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.135/32] ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.476645 containerd[1959]: 2025-12-16 02:10:02.374 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ad2bdef847 ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.476645 containerd[1959]: 2025-12-16 02:10:02.409 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" 
WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.476645 containerd[1959]: 2025-12-16 02:10:02.415 [INFO][4960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6568dd52-5b39-4fa9-85f5-6b3edadbd038", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae", Pod:"coredns-674b8bbfcf-spv6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ad2bdef847", MAC:"0e:d5:ad:5a:0e:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.476645 containerd[1959]: 2025-12-16 02:10:02.468 [INFO][4960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-spv6q" WorkloadEndpoint="ip--172--31--29--223-k8s-coredns--674b8bbfcf--spv6q-eth0" Dec 16 02:10:02.578372 containerd[1959]: time="2025-12-16T02:10:02.577681851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:02.586479 containerd[1959]: time="2025-12-16T02:10:02.586370427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:10:02.586672 containerd[1959]: time="2025-12-16T02:10:02.586543047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:02.586981 kubelet[3435]: E1216 02:10:02.586924 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:02.587251 kubelet[3435]: E1216 02:10:02.587204 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:02.588661 containerd[1959]: time="2025-12-16T02:10:02.588569235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:02.590557 kubelet[3435]: E1216 02:10:02.590121 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:02.593462 kubelet[3435]: E1216 02:10:02.593305 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:10:02.613226 containerd[1959]: time="2025-12-16T02:10:02.613131735Z" level=info msg="connecting to shim 3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae" address="unix:///run/containerd/s/3c298dd1263336ae79cd8e76f20bfdcbd6e3b40e989fe85d983a7066beee9ffc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:02.645764 systemd[1]: Started cri-containerd-13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f.scope - libcontainer container 13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f. Dec 16 02:10:02.783312 systemd-networkd[1872]: calia944d4a223a: Gained IPv6LL Dec 16 02:10:02.816465 systemd[1]: Started cri-containerd-3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae.scope - libcontainer container 3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae. Dec 16 02:10:02.870000 audit: BPF prog-id=209 op=LOAD Dec 16 02:10:02.873000 audit: BPF prog-id=210 op=LOAD Dec 16 02:10:02.873000 audit[5020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400019e180 a2=98 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.873000 audit: BPF prog-id=210 op=UNLOAD Dec 16 02:10:02.873000 audit[5020]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.874000 audit: BPF prog-id=211 op=LOAD Dec 16 02:10:02.874000 audit[5020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400019e3e8 a2=98 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.875000 audit: BPF prog-id=212 op=LOAD Dec 16 02:10:02.875000 audit[5020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400019e168 a2=98 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.875000 audit: BPF prog-id=212 op=UNLOAD Dec 16 02:10:02.875000 audit[5020]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.875000 audit: BPF prog-id=211 op=UNLOAD Dec 16 02:10:02.875000 audit[5020]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.876000 audit: BPF prog-id=213 op=LOAD Dec 16 02:10:02.876000 audit[5020]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400019e648 a2=98 a3=0 items=0 ppid=4993 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356637333638303935336239646639356261636130366131366432 Dec 16 02:10:02.887092 containerd[1959]: time="2025-12-16T02:10:02.886992917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:02.889871 containerd[1959]: time="2025-12-16T02:10:02.889484945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:02.889871 containerd[1959]: time="2025-12-16T02:10:02.889523837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:02.890120 kubelet[3435]: E1216 02:10:02.889979 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:02.892837 kubelet[3435]: E1216 02:10:02.890132 3435 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:02.892837 kubelet[3435]: E1216 02:10:02.890462 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmqs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7kpqw_calico-system(874b2e4b-6331-48a2-85aa-be2fc619cfc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:02.892837 kubelet[3435]: E1216 02:10:02.892282 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:10:02.935000 audit: BPF prog-id=214 op=LOAD Dec 16 02:10:02.937000 audit: BPF prog-id=215 op=LOAD Dec 16 02:10:02.937000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:02.937000 audit: BPF prog-id=215 op=UNLOAD Dec 16 02:10:02.937000 audit[5185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:02.937000 audit: BPF prog-id=216 op=LOAD Dec 16 02:10:02.937000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:02.938000 audit: BPF prog-id=217 op=LOAD Dec 16 02:10:02.938000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:02.939000 audit: BPF prog-id=217 op=UNLOAD Dec 16 02:10:02.939000 audit[5185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.939000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:02.939000 audit: BPF prog-id=216 op=UNLOAD Dec 16 02:10:02.939000 audit[5185]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:02.939000 audit: BPF prog-id=218 op=LOAD Dec 16 02:10:02.939000 audit[5185]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5174 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365363830376664386437323363373839613934343063393162346434 Dec 16 02:10:03.123524 containerd[1959]: time="2025-12-16T02:10:03.123447446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-spv6q,Uid:6568dd52-5b39-4fa9-85f5-6b3edadbd038,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae\"" Dec 16 02:10:03.142000 audit[5232]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.147794 containerd[1959]: time="2025-12-16T02:10:03.146568482Z" level=info msg="CreateContainer within sandbox \"3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:10:03.142000 audit[5232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd9bf5050 a2=0 a3=1 items=0 ppid=3591 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.191848 systemd-networkd[1872]: calie4e4b839ebf: Link UP Dec 16 02:10:03.195480 systemd-networkd[1872]: calie4e4b839ebf: Gained carrier Dec 16 02:10:03.199000 audit[5232]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.199000 audit[5232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd9bf5050 a2=0 a3=1 items=0 ppid=3591 pid=5232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 02:10:03.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.205571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046077790.mount: Deactivated successfully. Dec 16 02:10:03.236378 containerd[1959]: time="2025-12-16T02:10:03.236276942Z" level=info msg="Container 3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:02.836 [INFO][5160] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0 calico-kube-controllers-8478cfdcd8- calico-system e4cfe125-283d-49d5-a19e-c93c8097201d 888 0 2025-12-16 02:09:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8478cfdcd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-223 calico-kube-controllers-8478cfdcd8-h8dlm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie4e4b839ebf [] [] }} ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:02.842 [INFO][5160] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.007 [INFO][5217] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" HandleID="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Workload="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.008 [INFO][5217] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" HandleID="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Workload="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cda0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-223", "pod":"calico-kube-controllers-8478cfdcd8-h8dlm", "timestamp":"2025-12-16 02:10:03.007823329 +0000 UTC"}, Hostname:"ip-172-31-29-223", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.008 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.008 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.009 [INFO][5217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-223' Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.043 [INFO][5217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.059 [INFO][5217] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.095 [INFO][5217] ipam/ipam.go 511: Trying affinity for 192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.102 [INFO][5217] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.113 [INFO][5217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.114 [INFO][5217] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.119 [INFO][5217] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955 Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.136 [INFO][5217] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.157 [INFO][5217] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.136/26] block=192.168.54.128/26 handle="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.157 [INFO][5217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.136/26] handle="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" host="ip-172-31-29-223" Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.157 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
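
The same /26 block serves the next request: the coredns pod received 192.168.54.135 above, and calico-kube-controllers now gets 192.168.54.136. As a quick sanity check of that arithmetic (standard library only, nothing Calico-specific): a /26 holds 2^(32-26) = 64 addresses, so 192.168.54.128/26 spans .128 through .191 and both assignments fall inside the host-affine block.

// Verifies the block arithmetic implied by the log entries above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.54.128/26")
	for _, s := range []string{"192.168.54.135", "192.168.54.136"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(netip.MustParseAddr(s)))
	}
	// Size of the block: 2^(32-26) = 64 addresses.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))
}
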
Dec 16 02:10:03.260960 containerd[1959]: 2025-12-16 02:10:03.157 [INFO][5217] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.136/26] IPv6=[] ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" HandleID="k8s-pod-network.75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Workload="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.263723 containerd[1959]: 2025-12-16 02:10:03.168 [INFO][5160] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0", GenerateName:"calico-kube-controllers-8478cfdcd8-", Namespace:"calico-system", SelfLink:"", UID:"e4cfe125-283d-49d5-a19e-c93c8097201d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8478cfdcd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"", Pod:"calico-kube-controllers-8478cfdcd8-h8dlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4e4b839ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:03.263723 containerd[1959]: 2025-12-16 02:10:03.169 [INFO][5160] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.136/32] ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.263723 containerd[1959]: 2025-12-16 02:10:03.169 [INFO][5160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4e4b839ebf ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.263723 containerd[1959]: 2025-12-16 02:10:03.196 [INFO][5160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.263723 containerd[1959]: 
2025-12-16 02:10:03.203 [INFO][5160] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0", GenerateName:"calico-kube-controllers-8478cfdcd8-", Namespace:"calico-system", SelfLink:"", UID:"e4cfe125-283d-49d5-a19e-c93c8097201d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8478cfdcd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-223", ContainerID:"75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955", Pod:"calico-kube-controllers-8478cfdcd8-h8dlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4e4b839ebf", MAC:"4a:2d:d9:90:6c:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:03.263723 containerd[1959]: 2025-12-16 02:10:03.246 [INFO][5160] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" Namespace="calico-system" Pod="calico-kube-controllers-8478cfdcd8-h8dlm" WorkloadEndpoint="ip--172--31--29--223-k8s-calico--kube--controllers--8478cfdcd8--h8dlm-eth0" Dec 16 02:10:03.283457 containerd[1959]: time="2025-12-16T02:10:03.283253895Z" level=info msg="CreateContainer within sandbox \"3e6807fd8d723c789a9440c91b4d400e1877d1354afdf40fe47f8ab0db8dd9ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7\"" Dec 16 02:10:03.285136 containerd[1959]: time="2025-12-16T02:10:03.284793435Z" level=info msg="StartContainer for \"3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7\"" Dec 16 02:10:03.290930 containerd[1959]: time="2025-12-16T02:10:03.290334663Z" level=info msg="connecting to shim 3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7" address="unix:///run/containerd/s/3c298dd1263336ae79cd8e76f20bfdcbd6e3b40e989fe85d983a7066beee9ffc" protocol=ttrpc version=3 Dec 16 02:10:03.368724 containerd[1959]: time="2025-12-16T02:10:03.368628507Z" level=info msg="connecting to shim 75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955" address="unix:///run/containerd/s/322a741fd09621e0f8fa683c335b145b5a4336b8fcdd9b003cfbc7c32936f094" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:03.378000 audit: BPF 
prog-id=219 op=LOAD Dec 16 02:10:03.385000 audit: BPF prog-id=220 op=LOAD Dec 16 02:10:03.385000 audit[5138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000170180 a2=98 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.386000 audit: BPF prog-id=220 op=UNLOAD Dec 16 02:10:03.386000 audit[5138]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.390000 audit[5260]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.390000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffcf146f0 a2=0 a3=1 items=0 ppid=3591 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.395000 audit: BPF prog-id=221 op=LOAD Dec 16 02:10:03.395000 audit[5138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001703e8 a2=98 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.402000 audit: BPF prog-id=222 op=LOAD Dec 16 02:10:03.402000 audit[5138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000170168 a2=98 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.404000 audit: BPF prog-id=222 op=UNLOAD Dec 16 02:10:03.404000 audit[5138]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.404000 audit: BPF prog-id=221 op=UNLOAD Dec 16 02:10:03.404000 audit[5138]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.405000 audit[5260]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.405000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffcf146f0 a2=0 a3=1 items=0 ppid=3591 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.404000 audit: BPF prog-id=223 op=LOAD Dec 16 02:10:03.404000 audit[5138]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000170648 a2=98 a3=0 items=0 ppid=5124 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133373536653034633636313462343931333061316433346164356337 Dec 16 02:10:03.440085 kubelet[3435]: E1216 02:10:03.433915 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:03.440085 
kubelet[3435]: E1216 02:10:03.435580 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:10:03.440085 kubelet[3435]: E1216 02:10:03.435906 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:10:03.602506 systemd[1]: Started cri-containerd-3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7.scope - libcontainer container 3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7. Dec 16 02:10:03.630460 systemd[1]: Started cri-containerd-75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955.scope - libcontainer container 75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955. 
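
The kubelet entries above have moved the failed pulls into ImagePullBackOff, while the surrounding audit records log each runc invocation with a PROCTITLE field: the command line hex-encoded with NUL bytes separating the arguments, as in /proc/<pid>/cmdline. A small decoder (standard library only, not part of any audit tooling; the hex value below is a shortened prefix of one of the records above) makes those fields readable.

// Decodes an audit PROCTITLE value into a readable command line.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit proctitle hex string into argv.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	// Arguments are separated by NUL bytes, as in /proc/<pid>/cmdline.
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Shortened prefix of a PROCTITLE value from the records above.
	h := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	args, err := decodeProctitle(h)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}
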
Dec 16 02:10:03.734000 audit: BPF prog-id=224 op=LOAD Dec 16 02:10:03.742000 audit: BPF prog-id=225 op=LOAD Dec 16 02:10:03.742000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.743000 audit: BPF prog-id=225 op=UNLOAD Dec 16 02:10:03.743000 audit[5249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.743000 audit: BPF prog-id=226 op=LOAD Dec 16 02:10:03.743000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.745000 audit: BPF prog-id=227 op=LOAD Dec 16 02:10:03.745000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.746000 audit: BPF prog-id=227 op=UNLOAD Dec 16 02:10:03.746000 audit[5249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.746000 audit: BPF prog-id=226 op=UNLOAD Dec 16 02:10:03.746000 audit[5249]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.746000 audit: BPF prog-id=228 op=LOAD Dec 16 02:10:03.746000 audit[5249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5174 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361313762326434383537653030653439633133313238316438353638 Dec 16 02:10:03.751383 containerd[1959]: time="2025-12-16T02:10:03.749603093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-7gb6n,Uid:495e068f-5f86-4d9c-b537-813d53666a90,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3e5f73680953b9df95baca06a16d280571b666b48f7bdbf98692a356f59100fb\"" Dec 16 02:10:03.758090 containerd[1959]: time="2025-12-16T02:10:03.757091717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:03.807676 systemd-networkd[1872]: calia0f7ad041ff: Gained IPv6LL Dec 16 02:10:03.864027 containerd[1959]: time="2025-12-16T02:10:03.863969501Z" level=info msg="StartContainer for \"3a17b2d4857e00e49c131281d856828458b308bcbe0d7b8a5e651eb76b0a14a7\" returns successfully" Dec 16 02:10:03.871497 systemd-networkd[1872]: cali1ad2bdef847: Gained IPv6LL Dec 16 02:10:03.960000 audit[5314]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.960000 audit[5314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8e1e2d0 a2=0 a3=1 items=0 ppid=3591 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.985667 containerd[1959]: time="2025-12-16T02:10:03.985163502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbfbc6879-szgl8,Uid:9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"13756e04c6614b49130a1d34ad5c7de9418bb03c8f6e281c41c826d53b0ad39f\"" Dec 16 02:10:04.004000 audit[5314]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:04.004000 audit[5314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff8e1e2d0 a2=0 a3=1 items=0 ppid=3591 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:04.054301 containerd[1959]: time="2025-12-16T02:10:04.054223622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:04.057163 containerd[1959]: time="2025-12-16T02:10:04.056961482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:04.057163 containerd[1959]: time="2025-12-16T02:10:04.057023690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:04.057681 kubelet[3435]: E1216 02:10:04.057604 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:04.058397 kubelet[3435]: E1216 02:10:04.057686 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:04.060170 kubelet[3435]: E1216 02:10:04.059231 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8nl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-7gb6n_calico-apiserver(495e068f-5f86-4d9c-b537-813d53666a90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:04.060534 containerd[1959]: time="2025-12-16T02:10:04.059624162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:04.061915 kubelet[3435]: E1216 02:10:04.061815 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:04.114000 audit: BPF prog-id=229 op=LOAD Dec 16 02:10:04.117000 audit: BPF prog-id=230 op=LOAD Dec 16 02:10:04.117000 audit[5277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001ee180 a2=98 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.117000 audit: BPF prog-id=230 op=UNLOAD Dec 16 02:10:04.117000 audit[5277]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.117000 audit: BPF prog-id=231 op=LOAD Dec 16 02:10:04.117000 audit[5277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001ee3e8 a2=98 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.118000 audit: BPF prog-id=232 op=LOAD Dec 16 02:10:04.118000 audit[5277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001ee168 a2=98 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.118000 audit: BPF prog-id=232 op=UNLOAD Dec 16 02:10:04.118000 audit[5277]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.118000 audit: BPF prog-id=231 op=UNLOAD Dec 16 02:10:04.118000 audit[5277]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.118000 audit: BPF prog-id=233 op=LOAD Dec 16 02:10:04.118000 audit[5277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001ee648 a2=98 a3=0 items=0 ppid=5259 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643936636237303963343736363937343635663062616134346537 Dec 16 02:10:04.189000 audit: BPF prog-id=234 op=LOAD Dec 16 02:10:04.189000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee838528 a2=40 a3=ffffee838558 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.189000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.190000 audit: BPF prog-id=234 op=UNLOAD Dec 16 02:10:04.190000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffee838558 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.223000 audit: BPF prog-id=235 op=LOAD Dec 16 02:10:04.223000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee838538 a2=94 a3=4 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.224000 audit: BPF prog-id=235 op=UNLOAD Dec 16 02:10:04.224000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.224000 audit: BPF prog-id=236 op=LOAD Dec 16 02:10:04.224000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffee838378 a2=94 a3=5 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.224000 audit: BPF prog-id=236 op=UNLOAD Dec 16 02:10:04.224000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.224000 audit: BPF prog-id=237 op=LOAD Dec 16 02:10:04.224000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee8385a8 a2=94 a3=6 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.224000 audit: BPF prog-id=237 op=UNLOAD Dec 16 02:10:04.224000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.224000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.227000 audit: BPF prog-id=238 op=LOAD Dec 16 02:10:04.227000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=5 a0=5 a1=ffffee837d78 a2=94 a3=83 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.227000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.228000 audit: BPF prog-id=239 op=LOAD Dec 16 02:10:04.228000 audit[5030]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffee837b38 a2=94 a3=2 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.228000 audit: BPF prog-id=239 op=UNLOAD Dec 16 02:10:04.228000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.230000 audit: BPF prog-id=238 op=UNLOAD Dec 16 02:10:04.230000 audit[5030]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=4700620 a3=46f3b00 items=0 ppid=4606 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.230000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.250597 containerd[1959]: time="2025-12-16T02:10:04.250398519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8478cfdcd8-h8dlm,Uid:e4cfe125-283d-49d5-a19e-c93c8097201d,Namespace:calico-system,Attempt:0,} returns sandbox id \"75d96cb709c476697465f0baa44e77637dadc0cc61113d43ee0137ba98e39955\"" Dec 16 02:10:04.273000 audit: BPF prog-id=240 op=LOAD Dec 16 02:10:04.273000 audit[5353]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc46f2108 a2=98 a3=ffffc46f20f8 items=0 ppid=4606 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.273000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.274000 audit: BPF prog-id=240 op=UNLOAD Dec 16 02:10:04.274000 audit[5353]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc46f20d8 a3=0 items=0 ppid=4606 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.274000 
audit: BPF prog-id=241 op=LOAD Dec 16 02:10:04.274000 audit[5353]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc46f1fb8 a2=74 a3=95 items=0 ppid=4606 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.275000 audit: BPF prog-id=241 op=UNLOAD Dec 16 02:10:04.275000 audit[5353]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4606 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.275000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.275000 audit: BPF prog-id=242 op=LOAD Dec 16 02:10:04.275000 audit[5353]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc46f1fe8 a2=40 a3=ffffc46f2018 items=0 ppid=4606 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.275000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.275000 audit: BPF prog-id=242 op=UNLOAD Dec 16 02:10:04.275000 audit[5353]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc46f2018 items=0 ppid=4606 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.275000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.360642 containerd[1959]: time="2025-12-16T02:10:04.360561556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:04.364284 containerd[1959]: time="2025-12-16T02:10:04.364206016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:04.364535 containerd[1959]: time="2025-12-16T02:10:04.364347376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:04.365382 kubelet[3435]: E1216 02:10:04.365288 3435 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:04.366438 kubelet[3435]: E1216 02:10:04.365396 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:04.366574 containerd[1959]: time="2025-12-16T02:10:04.365935924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:04.367270 kubelet[3435]: E1216 02:10:04.366912 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85gh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-szgl8_calico-apiserver(9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:04.369116 kubelet[3435]: E1216 02:10:04.369016 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:04.382268 systemd-networkd[1872]: calie4e4b839ebf: Gained IPv6LL Dec 16 02:10:04.402711 kubelet[3435]: E1216 02:10:04.402542 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:04.418461 kubelet[3435]: E1216 02:10:04.418358 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:04.483710 systemd-networkd[1872]: vxlan.calico: Link UP Dec 16 02:10:04.483732 systemd-networkd[1872]: vxlan.calico: Gained carrier Dec 16 02:10:04.580000 audit[5370]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:04.580000 audit[5370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc692a810 a2=0 a3=1 items=0 ppid=3591 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:04.589000 audit[5370]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:04.598936 kubelet[3435]: I1216 02:10:04.598824 3435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-spv6q" podStartSLOduration=57.598797917 podStartE2EDuration="57.598797917s" podCreationTimestamp="2025-12-16 02:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:04.541653389 +0000 UTC m=+62.027855061" watchObservedRunningTime="2025-12-16 02:10:04.598797917 +0000 UTC m=+62.084999709" Dec 16 02:10:04.589000 audit[5370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc692a810 a2=0 a3=1 items=0 ppid=3591 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 
16 02:10:04.649000 audit: BPF prog-id=243 op=LOAD Dec 16 02:10:04.649000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2f47b38 a2=98 a3=ffffe2f47b28 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.649000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.649000 audit: BPF prog-id=243 op=UNLOAD Dec 16 02:10:04.649000 audit[5384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe2f47b08 a3=0 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.649000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.649000 audit: BPF prog-id=244 op=LOAD Dec 16 02:10:04.649000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2f47818 a2=74 a3=95 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.649000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.649000 audit: BPF prog-id=244 op=UNLOAD Dec 16 02:10:04.649000 audit[5384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.649000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.649000 audit: BPF prog-id=245 op=LOAD Dec 16 02:10:04.649000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2f47878 a2=94 a3=2 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.649000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.650000 audit: BPF prog-id=245 op=UNLOAD Dec 16 02:10:04.650000 audit[5384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.650000 audit: BPF prog-id=246 op=LOAD Dec 16 02:10:04.650000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2f476f8 a2=40 a3=ffffe2f47728 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.650000 audit: BPF prog-id=246 op=UNLOAD Dec 16 02:10:04.650000 audit[5384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffe2f47728 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.650000 audit: BPF prog-id=247 op=LOAD Dec 16 02:10:04.650000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2f47848 a2=94 a3=b7 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.650000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.651000 audit: BPF prog-id=247 op=UNLOAD Dec 16 02:10:04.651000 audit[5384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.654000 audit: BPF prog-id=248 op=LOAD Dec 16 02:10:04.654000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2f46ef8 a2=94 a3=2 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.654000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.656000 audit: BPF prog-id=248 op=UNLOAD Dec 16 02:10:04.656000 audit[5384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.656000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.656000 audit: BPF prog-id=249 op=LOAD Dec 16 02:10:04.656000 audit[5384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2f47088 a2=94 a3=30 items=0 ppid=4606 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.656000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:04.667000 audit: BPF prog-id=250 op=LOAD Dec 16 02:10:04.667000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd74fea8 a2=98 a3=ffffcd74fe98 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.667000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.671000 audit: BPF prog-id=250 op=UNLOAD Dec 16 02:10:04.671000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcd74fe78 a3=0 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.671000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.671000 audit: BPF prog-id=251 op=LOAD Dec 16 02:10:04.671000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd74fb38 a2=74 a3=95 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.671000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.671000 audit: BPF prog-id=251 op=UNLOAD Dec 16 02:10:04.671000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4606 pid=5386 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.671000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.671000 audit: BPF prog-id=252 op=LOAD Dec 16 02:10:04.671000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd74fb98 a2=94 a3=2 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.671000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.671000 audit: BPF prog-id=252 op=UNLOAD Dec 16 02:10:04.671000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.671000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.680232 containerd[1959]: time="2025-12-16T02:10:04.680170421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:04.682968 containerd[1959]: time="2025-12-16T02:10:04.682891961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:04.683458 containerd[1959]: time="2025-12-16T02:10:04.682912673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:04.684967 kubelet[3435]: E1216 02:10:04.684888 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:04.685210 kubelet[3435]: E1216 02:10:04.684968 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:04.686388 kubelet[3435]: E1216 02:10:04.686254 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmhxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8478cfdcd8-h8dlm_calico-system(e4cfe125-283d-49d5-a19e-c93c8097201d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:04.687589 kubelet[3435]: E1216 02:10:04.687511 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:10:04.904000 audit: BPF prog-id=253 op=LOAD Dec 16 02:10:04.904000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd74fb58 a2=40 a3=ffffcd74fb88 items=0 ppid=4606 pid=5386 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.904000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.904000 audit: BPF prog-id=253 op=UNLOAD Dec 16 02:10:04.904000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffcd74fb88 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.904000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.925000 audit: BPF prog-id=254 op=LOAD Dec 16 02:10:04.925000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcd74fb68 a2=94 a3=4 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.925000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.926000 audit: BPF prog-id=254 op=UNLOAD Dec 16 02:10:04.926000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.926000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.926000 audit: BPF prog-id=255 op=LOAD Dec 16 02:10:04.926000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd74f9a8 a2=94 a3=5 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.926000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.927000 audit: BPF prog-id=255 op=UNLOAD Dec 16 02:10:04.927000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.927000 audit: BPF prog-id=256 op=LOAD Dec 16 02:10:04.927000 
audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcd74fbd8 a2=94 a3=6 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.927000 audit: BPF prog-id=256 op=UNLOAD Dec 16 02:10:04.927000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.927000 audit: BPF prog-id=257 op=LOAD Dec 16 02:10:04.927000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcd74f3a8 a2=94 a3=83 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.928000 audit: BPF prog-id=258 op=LOAD Dec 16 02:10:04.928000 audit[5386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffcd74f168 a2=94 a3=2 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.928000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.928000 audit: BPF prog-id=258 op=UNLOAD Dec 16 02:10:04.928000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.928000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.929000 audit: BPF prog-id=257 op=UNLOAD Dec 16 02:10:04.929000 audit[5386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=35f5c620 a3=35f4fb00 items=0 ppid=4606 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.929000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:04.942000 audit: BPF prog-id=249 op=UNLOAD Dec 16 02:10:04.942000 audit[4606]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40009300c0 a2=0 a3=0 items=0 ppid=4594 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.942000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 02:10:05.169000 audit[5414]: NETFILTER_CFG table=raw:131 family=2 entries=21 op=nft_register_chain pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:05.169000 audit[5414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffeae43960 a2=0 a3=ffffa0e84fa8 items=0 ppid=4606 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.169000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:05.170000 audit[5418]: NETFILTER_CFG table=nat:132 family=2 entries=15 op=nft_register_chain pid=5418 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:05.170000 audit[5418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffee50e530 a2=0 a3=ffffaf1ecfa8 items=0 ppid=4606 pid=5418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.170000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:05.174000 audit[5422]: NETFILTER_CFG table=mangle:133 family=2 entries=16 op=nft_register_chain pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:05.174000 audit[5422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff59b8650 a2=0 a3=ffff93f15fa8 items=0 ppid=4606 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.174000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:05.195000 audit[5419]: NETFILTER_CFG table=filter:134 family=2 entries=321 op=nft_register_chain pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:05.195000 audit[5419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=190616 a0=3 a1=ffffdfacad80 a2=0 a3=ffffb77dbfa8 items=0 ppid=4606 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.195000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:05.451258 kubelet[3435]: E1216 02:10:05.450627 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:05.453892 kubelet[3435]: E1216 02:10:05.453613 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:05.453892 kubelet[3435]: E1216 02:10:05.453806 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:10:05.642000 audit[5434]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:05.642000 audit[5434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff4351f30 a2=0 a3=1 items=0 ppid=3591 pid=5434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:05.652000 audit[5434]: NETFILTER_CFG table=nat:136 family=2 entries=44 op=nft_register_rule pid=5434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:05.652000 audit[5434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff4351f30 a2=0 a3=1 items=0 ppid=3591 pid=5434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:05.726922 systemd-networkd[1872]: vxlan.calico: Gained IPv6LL Dec 16 02:10:06.688000 audit[5436]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Dec 16 02:10:06.690981 kernel: kauditd_printk_skb: 394 callbacks suppressed Dec 16 02:10:06.691156 kernel: audit: type=1325 audit(1765851006.688:740): table=filter:137 family=2 entries=14 op=nft_register_rule pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:06.688000 audit[5436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe702e0e0 a2=0 a3=1 items=0 ppid=3591 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.700969 kernel: audit: type=1300 audit(1765851006.688:740): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe702e0e0 a2=0 a3=1 items=0 ppid=3591 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:06.704762 kernel: audit: type=1327 audit(1765851006.688:740): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:06.711000 audit[5436]: NETFILTER_CFG table=nat:138 family=2 entries=56 op=nft_register_chain pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:06.711000 audit[5436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe702e0e0 a2=0 a3=1 items=0 ppid=3591 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.723851 kernel: audit: type=1325 audit(1765851006.711:741): table=nat:138 family=2 entries=56 op=nft_register_chain pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:06.723972 kernel: audit: type=1300 audit(1765851006.711:741): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe702e0e0 a2=0 a3=1 items=0 ppid=3591 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:06.729292 kernel: audit: type=1327 audit(1765851006.711:741): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:07.955474 ntpd[1933]: Listen normally on 7 vxlan.calico 192.168.54.128:123 Dec 16 02:10:07.955569 ntpd[1933]: Listen normally on 8 cali5db082ead3e [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 7 vxlan.calico 192.168.54.128:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 8 cali5db082ead3e [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 9 cali603e7a43b8a [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 10 cali79e086e721b [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 02:10:07.956838 ntpd[1933]: 
16 Dec 02:10:07 ntpd[1933]: Listen normally on 11 cali95aecd30ada [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 12 calia944d4a223a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 13 calia0f7ad041ff [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 14 cali1ad2bdef847 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 15 calie4e4b839ebf [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 02:10:07.956838 ntpd[1933]: 16 Dec 02:10:07 ntpd[1933]: Listen normally on 16 vxlan.calico [fe80::6477:d0ff:feb7:2cae%12]:123 Dec 16 02:10:07.955621 ntpd[1933]: Listen normally on 9 cali603e7a43b8a [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 02:10:07.955671 ntpd[1933]: Listen normally on 10 cali79e086e721b [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 02:10:07.955725 ntpd[1933]: Listen normally on 11 cali95aecd30ada [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 02:10:07.955773 ntpd[1933]: Listen normally on 12 calia944d4a223a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 02:10:07.955822 ntpd[1933]: Listen normally on 13 calia0f7ad041ff [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 02:10:07.955871 ntpd[1933]: Listen normally on 14 cali1ad2bdef847 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 02:10:07.955920 ntpd[1933]: Listen normally on 15 calie4e4b839ebf [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 02:10:07.955977 ntpd[1933]: Listen normally on 16 vxlan.calico [fe80::6477:d0ff:feb7:2cae%12]:123 Dec 16 02:10:09.146230 systemd[1]: Started sshd@9-172.31.29.223:22-139.178.89.65:37568.service - OpenSSH per-connection server daemon (139.178.89.65:37568). Dec 16 02:10:09.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.223:22-139.178.89.65:37568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:09.156110 kernel: audit: type=1130 audit(1765851009.145:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.223:22-139.178.89.65:37568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:09.388000 audit[5446]: USER_ACCT pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.395457 sshd[5446]: Accepted publickey for core from 139.178.89.65 port 37568 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:09.396132 kernel: audit: type=1101 audit(1765851009.388:743): pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.395000 audit[5446]: CRED_ACQ pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.398670 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:09.405655 kernel: audit: type=1103 audit(1765851009.395:744): pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.405815 kernel: audit: type=1006 audit(1765851009.395:745): pid=5446 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 02:10:09.395000 audit[5446]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffda90ad0 a2=3 a3=0 items=0 ppid=1 pid=5446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:09.395000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:09.415034 systemd-logind[1940]: New session 11 of user core. Dec 16 02:10:09.422481 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 02:10:09.430000 audit[5446]: USER_START pid=5446 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.434000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.684891 sshd[5450]: Connection closed by 139.178.89.65 port 37568 Dec 16 02:10:09.686485 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:09.690000 audit[5446]: USER_END pid=5446 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.691000 audit[5446]: CRED_DISP pid=5446 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:09.699982 systemd-logind[1940]: Session 11 logged out. Waiting for processes to exit. Dec 16 02:10:09.700369 systemd[1]: sshd@9-172.31.29.223:22-139.178.89.65:37568.service: Deactivated successfully. Dec 16 02:10:09.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.223:22-139.178.89.65:37568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:09.707837 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 02:10:09.717139 systemd-logind[1940]: Removed session 11. 
Dec 16 02:10:13.827792 containerd[1959]: time="2025-12-16T02:10:13.827614743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:14.080482 containerd[1959]: time="2025-12-16T02:10:14.080287164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:14.082606 containerd[1959]: time="2025-12-16T02:10:14.082536900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:14.082758 containerd[1959]: time="2025-12-16T02:10:14.082660788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:14.082951 kubelet[3435]: E1216 02:10:14.082889 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:14.085171 kubelet[3435]: E1216 02:10:14.082951 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:14.085171 kubelet[3435]: E1216 02:10:14.083274 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmqs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7kpqw_calico-system(874b2e4b-6331-48a2-85aa-be2fc619cfc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:14.085171 kubelet[3435]: E1216 02:10:14.084635 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:10:14.728470 systemd[1]: Started sshd@10-172.31.29.223:22-139.178.89.65:57134.service - OpenSSH per-connection server daemon (139.178.89.65:57134). Dec 16 02:10:14.736469 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:10:14.736606 kernel: audit: type=1130 audit(1765851014.728:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.223:22-139.178.89.65:57134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:14.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.223:22-139.178.89.65:57134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:14.828525 containerd[1959]: time="2025-12-16T02:10:14.828019912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:14.947000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:14.948976 sshd[5472]: Accepted publickey for core from 139.178.89.65 port 57134 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:14.953000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:14.957117 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:14.961315 kernel: audit: type=1101 audit(1765851014.947:752): pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:14.961442 kernel: audit: type=1103 audit(1765851014.953:753): pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:14.966396 kernel: audit: type=1006 audit(1765851014.954:754): pid=5472 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 02:10:14.954000 audit[5472]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6551df0 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:14.974664 kernel: audit: type=1300 audit(1765851014.954:754): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6551df0 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:14.975096 kernel: audit: type=1327 audit(1765851014.954:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:14.954000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:14.982438 systemd-logind[1940]: New session 12 of user core. Dec 16 02:10:14.992449 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 02:10:14.999000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.008382 kernel: audit: type=1105 audit(1765851014.999:755): pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.007000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.015230 kernel: audit: type=1103 audit(1765851015.007:756): pid=5476 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.097701 containerd[1959]: time="2025-12-16T02:10:15.097601605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:15.100896 containerd[1959]: time="2025-12-16T02:10:15.100215385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:15.100896 containerd[1959]: time="2025-12-16T02:10:15.100353073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:15.102846 kubelet[3435]: E1216 02:10:15.101365 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:15.102846 kubelet[3435]: E1216 02:10:15.101435 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:15.102846 kubelet[3435]: E1216 02:10:15.101590 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:117178efe1694065bf2a2114c9173c9e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:15.106773 containerd[1959]: time="2025-12-16T02:10:15.105971185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:15.227965 sshd[5476]: Connection closed by 139.178.89.65 port 57134 Dec 16 02:10:15.230374 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:15.235000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.246317 systemd[1]: sshd@10-172.31.29.223:22-139.178.89.65:57134.service: Deactivated successfully. 
Dec 16 02:10:15.236000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.255326 kernel: audit: type=1106 audit(1765851015.235:757): pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.255470 kernel: audit: type=1104 audit(1765851015.236:758): pid=5472 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:15.256423 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 02:10:15.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.223:22-139.178.89.65:57134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:15.261451 systemd-logind[1940]: Session 12 logged out. Waiting for processes to exit. Dec 16 02:10:15.267279 systemd-logind[1940]: Removed session 12. Dec 16 02:10:15.363479 containerd[1959]: time="2025-12-16T02:10:15.363396027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:15.365747 containerd[1959]: time="2025-12-16T02:10:15.365626215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:15.365871 containerd[1959]: time="2025-12-16T02:10:15.365672115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:15.366394 kubelet[3435]: E1216 02:10:15.366247 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:15.366614 kubelet[3435]: E1216 02:10:15.366533 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:15.367705 kubelet[3435]: E1216 02:10:15.367561 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:15.369379 kubelet[3435]: E1216 02:10:15.369177 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:15.825500 containerd[1959]: time="2025-12-16T02:10:15.825372929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:16.122693 containerd[1959]: time="2025-12-16T02:10:16.122524226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:16.124907 containerd[1959]: time="2025-12-16T02:10:16.124825634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 02:10:16.125027 containerd[1959]: time="2025-12-16T02:10:16.124867886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:16.125491 kubelet[3435]: E1216 02:10:16.125420 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:16.127145 kubelet[3435]: E1216 02:10:16.125988 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:16.127654 kubelet[3435]: E1216 02:10:16.127511 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85gh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-szgl8_calico-apiserver(9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:16.129183 kubelet[3435]: E1216 02:10:16.129036 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:16.827252 containerd[1959]: time="2025-12-16T02:10:16.826871130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:17.087990 containerd[1959]: time="2025-12-16T02:10:17.087850155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:17.090487 containerd[1959]: time="2025-12-16T02:10:17.090428907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:17.090816 containerd[1959]: time="2025-12-16T02:10:17.090660435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:17.091242 kubelet[3435]: E1216 02:10:17.091170 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:17.091367 kubelet[3435]: E1216 02:10:17.091242 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:17.091537 kubelet[3435]: E1216 02:10:17.091442 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:17.095719 containerd[1959]: time="2025-12-16T02:10:17.095605647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:10:17.364313 containerd[1959]: time="2025-12-16T02:10:17.364127872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:17.366878 containerd[1959]: time="2025-12-16T02:10:17.366754216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:10:17.367135 containerd[1959]: time="2025-12-16T02:10:17.366859624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:17.367565 kubelet[3435]: E1216 02:10:17.367488 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:17.368357 kubelet[3435]: E1216 02:10:17.367564 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:17.370708 kubelet[3435]: E1216 02:10:17.370593 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:17.372281 kubelet[3435]: E1216 02:10:17.372175 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:10:18.827466 containerd[1959]: time="2025-12-16T02:10:18.827343548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:19.118008 containerd[1959]: time="2025-12-16T02:10:19.117840101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
02:10:19.120193 containerd[1959]: time="2025-12-16T02:10:19.120111269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:19.120756 containerd[1959]: time="2025-12-16T02:10:19.120275849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:19.121789 kubelet[3435]: E1216 02:10:19.121035 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:19.121789 kubelet[3435]: E1216 02:10:19.121136 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:19.122962 kubelet[3435]: E1216 02:10:19.121372 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8nl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-7gb6n_calico-apiserver(495e068f-5f86-4d9c-b537-813d53666a90): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:19.124837 kubelet[3435]: E1216 02:10:19.124754 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:20.271686 systemd[1]: Started sshd@11-172.31.29.223:22-139.178.89.65:46712.service - OpenSSH per-connection server daemon (139.178.89.65:46712). Dec 16 02:10:20.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.223:22-139.178.89.65:46712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:20.273530 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:20.273705 kernel: audit: type=1130 audit(1765851020.271:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.223:22-139.178.89.65:46712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:20.477159 sshd[5497]: Accepted publickey for core from 139.178.89.65 port 46712 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:20.475000 audit[5497]: USER_ACCT pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.485980 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:20.483000 audit[5497]: CRED_ACQ pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.493296 kernel: audit: type=1101 audit(1765851020.475:761): pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.493380 kernel: audit: type=1103 audit(1765851020.483:762): pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.494114 kernel: audit: type=1006 audit(1765851020.483:763): pid=5497 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 02:10:20.483000 audit[5497]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffedde4370 a2=3 a3=0 items=0 ppid=1 pid=5497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:10:20.505105 kernel: audit: type=1300 audit(1765851020.483:763): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffedde4370 a2=3 a3=0 items=0 ppid=1 pid=5497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:20.483000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:20.512116 kernel: audit: type=1327 audit(1765851020.483:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:20.520704 systemd-logind[1940]: New session 13 of user core. Dec 16 02:10:20.530530 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 02:10:20.538000 audit[5497]: USER_START pid=5497 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.546000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.553020 kernel: audit: type=1105 audit(1765851020.538:764): pid=5497 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.553237 kernel: audit: type=1103 audit(1765851020.546:765): pid=5501 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.737293 sshd[5501]: Connection closed by 139.178.89.65 port 46712 Dec 16 02:10:20.737690 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:20.739000 audit[5497]: USER_END pid=5497 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.746765 systemd[1]: sshd@11-172.31.29.223:22-139.178.89.65:46712.service: Deactivated successfully. Dec 16 02:10:20.747337 systemd-logind[1940]: Session 13 logged out. Waiting for processes to exit. Dec 16 02:10:20.739000 audit[5497]: CRED_DISP pid=5497 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.753851 systemd[1]: session-13.scope: Deactivated successfully. 
Dec 16 02:10:20.754216 kernel: audit: type=1106 audit(1765851020.739:766): pid=5497 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.754400 kernel: audit: type=1104 audit(1765851020.739:767): pid=5497 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:20.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.223:22-139.178.89.65:46712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:20.777592 systemd-logind[1940]: Removed session 13. Dec 16 02:10:20.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.223:22-139.178.89.65:46716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:20.779419 systemd[1]: Started sshd@12-172.31.29.223:22-139.178.89.65:46716.service - OpenSSH per-connection server daemon (139.178.89.65:46716). Dec 16 02:10:20.840963 containerd[1959]: time="2025-12-16T02:10:20.838908226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:21.031000 audit[5514]: USER_ACCT pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.032812 sshd[5514]: Accepted publickey for core from 139.178.89.65 port 46716 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:21.034000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.034000 audit[5514]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd93d8450 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.034000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:21.037259 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:21.047938 systemd-logind[1940]: New session 14 of user core. Dec 16 02:10:21.067459 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 02:10:21.073000 audit[5514]: USER_START pid=5514 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.076000 audit[5518]: CRED_ACQ pid=5518 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.128122 containerd[1959]: time="2025-12-16T02:10:21.127912747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:21.130505 containerd[1959]: time="2025-12-16T02:10:21.130432075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:21.130652 containerd[1959]: time="2025-12-16T02:10:21.130564591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:21.133240 kubelet[3435]: E1216 02:10:21.133037 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:21.134446 kubelet[3435]: E1216 02:10:21.133267 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:21.134446 kubelet[3435]: E1216 02:10:21.133553 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmhxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8478cfdcd8-h8dlm_calico-system(e4cfe125-283d-49d5-a19e-c93c8097201d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:21.135039 kubelet[3435]: E1216 02:10:21.134827 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:10:21.381834 sshd[5518]: Connection closed by 139.178.89.65 port 46716 Dec 16 02:10:21.383570 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:21.386000 audit[5514]: USER_END 
pid=5514 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.388000 audit[5514]: CRED_DISP pid=5514 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.398495 systemd-logind[1940]: Session 14 logged out. Waiting for processes to exit. Dec 16 02:10:21.401535 systemd[1]: sshd@12-172.31.29.223:22-139.178.89.65:46716.service: Deactivated successfully. Dec 16 02:10:21.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.223:22-139.178.89.65:46716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:21.409715 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 02:10:21.431375 systemd-logind[1940]: Removed session 14. Dec 16 02:10:21.434889 systemd[1]: Started sshd@13-172.31.29.223:22-139.178.89.65:46720.service - OpenSSH per-connection server daemon (139.178.89.65:46720). Dec 16 02:10:21.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.223:22-139.178.89.65:46720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:21.650000 audit[5530]: USER_ACCT pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.652352 sshd[5530]: Accepted publickey for core from 139.178.89.65 port 46720 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:21.653000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.653000 audit[5530]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2581e30 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:21.653000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:21.657114 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:21.668261 systemd-logind[1940]: New session 15 of user core. Dec 16 02:10:21.678416 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 02:10:21.685000 audit[5530]: USER_START pid=5530 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.691000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.916169 sshd[5534]: Connection closed by 139.178.89.65 port 46720 Dec 16 02:10:21.917456 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:21.920000 audit[5530]: USER_END pid=5530 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.921000 audit[5530]: CRED_DISP pid=5530 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:21.927274 systemd[1]: sshd@13-172.31.29.223:22-139.178.89.65:46720.service: Deactivated successfully. Dec 16 02:10:21.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.223:22-139.178.89.65:46720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:21.933118 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 02:10:21.936114 systemd-logind[1940]: Session 15 logged out. Waiting for processes to exit. Dec 16 02:10:21.940596 systemd-logind[1940]: Removed session 15. Dec 16 02:10:25.826374 kubelet[3435]: E1216 02:10:25.826302 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:10:26.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.223:22-139.178.89.65:46734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:26.957617 systemd[1]: Started sshd@14-172.31.29.223:22-139.178.89.65:46734.service - OpenSSH per-connection server daemon (139.178.89.65:46734). Dec 16 02:10:26.961085 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 02:10:26.961284 kernel: audit: type=1130 audit(1765851026.956:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.223:22-139.178.89.65:46734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:27.165000 audit[5554]: USER_ACCT pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.166777 sshd[5554]: Accepted publickey for core from 139.178.89.65 port 46734 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:27.174110 kernel: audit: type=1101 audit(1765851027.165:788): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.174000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.183973 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:27.188225 kernel: audit: type=1103 audit(1765851027.174:789): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.189434 kernel: audit: type=1006 audit(1765851027.180:790): pid=5554 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 02:10:27.180000 audit[5554]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc415b880 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:27.196676 kernel: audit: type=1300 audit(1765851027.180:790): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc415b880 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:27.180000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:27.202916 kernel: audit: type=1327 audit(1765851027.180:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:27.217332 systemd-logind[1940]: New session 16 of user core. Dec 16 02:10:27.225463 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 02:10:27.240000 audit[5554]: USER_START pid=5554 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.253000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.260773 kernel: audit: type=1105 audit(1765851027.240:791): pid=5554 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.262445 kernel: audit: type=1103 audit(1765851027.253:792): pid=5566 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.562415 sshd[5566]: Connection closed by 139.178.89.65 port 46734 Dec 16 02:10:27.561780 sshd-session[5554]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:27.568000 audit[5554]: USER_END pid=5554 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.569000 audit[5554]: CRED_DISP pid=5554 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.580695 systemd[1]: sshd@14-172.31.29.223:22-139.178.89.65:46734.service: Deactivated successfully. Dec 16 02:10:27.585575 kernel: audit: type=1106 audit(1765851027.568:793): pid=5554 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.585728 kernel: audit: type=1104 audit(1765851027.569:794): pid=5554 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:27.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.223:22-139.178.89.65:46734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:27.587392 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 02:10:27.590990 systemd-logind[1940]: Session 16 logged out. Waiting for processes to exit. Dec 16 02:10:27.595418 systemd-logind[1940]: Removed session 16. 
Dec 16 02:10:27.830241 kubelet[3435]: E1216 02:10:27.830067 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:10:27.830884 kubelet[3435]: E1216 02:10:27.830271 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:28.826607 kubelet[3435]: E1216 02:10:28.826464 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:29.826700 kubelet[3435]: E1216 02:10:29.826447 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:32.602087 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:32.602222 kernel: audit: type=1130 audit(1765851032.599:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.223:22-139.178.89.65:40750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:32.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.223:22-139.178.89.65:40750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:32.599983 systemd[1]: Started sshd@15-172.31.29.223:22-139.178.89.65:40750.service - OpenSSH per-connection server daemon (139.178.89.65:40750). Dec 16 02:10:32.796000 audit[5601]: USER_ACCT pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.797692 sshd[5601]: Accepted publickey for core from 139.178.89.65 port 40750 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:32.804287 kernel: audit: type=1101 audit(1765851032.796:797): pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.804000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.807873 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:32.814519 kernel: audit: type=1103 audit(1765851032.804:798): pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.814802 kernel: audit: type=1006 audit(1765851032.805:799): pid=5601 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 02:10:32.805000 audit[5601]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc80a5330 a2=3 a3=0 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:32.821354 kernel: audit: type=1300 audit(1765851032.805:799): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc80a5330 a2=3 a3=0 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:32.805000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:32.824044 kernel: audit: type=1327 audit(1765851032.805:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:32.832747 systemd-logind[1940]: New session 17 of user core. Dec 16 02:10:32.845521 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 02:10:32.858000 audit[5601]: USER_START pid=5601 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.872099 kernel: audit: type=1105 audit(1765851032.858:800): pid=5601 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.872000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:32.879121 kernel: audit: type=1103 audit(1765851032.872:801): pid=5606 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:33.134020 sshd[5606]: Connection closed by 139.178.89.65 port 40750 Dec 16 02:10:33.132752 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:33.134000 audit[5601]: USER_END pid=5601 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:33.136000 audit[5601]: CRED_DISP pid=5601 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:33.143407 systemd[1]: sshd@15-172.31.29.223:22-139.178.89.65:40750.service: Deactivated successfully. Dec 16 02:10:33.148847 kernel: audit: type=1106 audit(1765851033.134:802): pid=5601 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:33.148956 kernel: audit: type=1104 audit(1765851033.136:803): pid=5601 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:33.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.223:22-139.178.89.65:40750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:33.149760 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 02:10:33.155070 systemd-logind[1940]: Session 17 logged out. Waiting for processes to exit. Dec 16 02:10:33.157577 systemd-logind[1940]: Removed session 17. 
Dec 16 02:10:33.827836 kubelet[3435]: E1216 02:10:33.827006 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:10:37.826558 containerd[1959]: time="2025-12-16T02:10:37.826269218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:38.153043 containerd[1959]: time="2025-12-16T02:10:38.152703120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:38.155071 containerd[1959]: time="2025-12-16T02:10:38.154952652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:38.156841 containerd[1959]: time="2025-12-16T02:10:38.155266176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:38.157178 kubelet[3435]: E1216 02:10:38.157108 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:38.157867 kubelet[3435]: E1216 02:10:38.157185 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:38.157867 kubelet[3435]: E1216 02:10:38.157442 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmqs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7kpqw_calico-system(874b2e4b-6331-48a2-85aa-be2fc619cfc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:38.159211 kubelet[3435]: E1216 02:10:38.158704 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:10:38.181837 systemd[1]: Started sshd@16-172.31.29.223:22-139.178.89.65:40754.service - OpenSSH per-connection server daemon 
(139.178.89.65:40754). Dec 16 02:10:38.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.223:22-139.178.89.65:40754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:38.185139 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:38.185273 kernel: audit: type=1130 audit(1765851038.181:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.223:22-139.178.89.65:40754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:38.430000 audit[5618]: USER_ACCT pid=5618 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.433360 sshd[5618]: Accepted publickey for core from 139.178.89.65 port 40754 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:38.438378 kernel: audit: type=1101 audit(1765851038.430:806): pid=5618 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.438000 audit[5618]: CRED_ACQ pid=5618 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.447490 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:38.456031 kernel: audit: type=1103 audit(1765851038.438:807): pid=5618 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.456214 kernel: audit: type=1006 audit(1765851038.438:808): pid=5618 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 02:10:38.438000 audit[5618]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb7bf110 a2=3 a3=0 items=0 ppid=1 pid=5618 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:38.465997 kernel: audit: type=1300 audit(1765851038.438:808): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb7bf110 a2=3 a3=0 items=0 ppid=1 pid=5618 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:38.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:38.472122 kernel: audit: type=1327 audit(1765851038.438:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:38.474988 systemd-logind[1940]: New session 18 of user core. Dec 16 02:10:38.483456 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 02:10:38.492000 audit[5618]: USER_START pid=5618 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.504096 kernel: audit: type=1105 audit(1765851038.492:809): pid=5618 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.504000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.512125 kernel: audit: type=1103 audit(1765851038.504:810): pid=5622 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.758670 sshd[5622]: Connection closed by 139.178.89.65 port 40754 Dec 16 02:10:38.757447 sshd-session[5618]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:38.761000 audit[5618]: USER_END pid=5618 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.770828 systemd[1]: sshd@16-172.31.29.223:22-139.178.89.65:40754.service: Deactivated successfully. Dec 16 02:10:38.761000 audit[5618]: CRED_DISP pid=5618 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.777719 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 02:10:38.779301 kernel: audit: type=1106 audit(1765851038.761:811): pid=5618 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.779405 kernel: audit: type=1104 audit(1765851038.761:812): pid=5618 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:38.784399 systemd-logind[1940]: Session 18 logged out. Waiting for processes to exit. Dec 16 02:10:38.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.223:22-139.178.89.65:40754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:38.788822 systemd-logind[1940]: Removed session 18. 
Dec 16 02:10:39.827469 containerd[1959]: time="2025-12-16T02:10:39.827315416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:40.105645 containerd[1959]: time="2025-12-16T02:10:40.105462553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:40.107947 containerd[1959]: time="2025-12-16T02:10:40.107774005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:40.107947 containerd[1959]: time="2025-12-16T02:10:40.107811301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:40.108233 kubelet[3435]: E1216 02:10:40.108159 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:40.108233 kubelet[3435]: E1216 02:10:40.108227 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:40.109723 kubelet[3435]: E1216 02:10:40.108396 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:117178efe1694065bf2a2114c9173c9e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:40.112607 containerd[1959]: time="2025-12-16T02:10:40.112526053Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:40.375414 containerd[1959]: time="2025-12-16T02:10:40.375228603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:40.378523 containerd[1959]: time="2025-12-16T02:10:40.378424995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:40.378791 containerd[1959]: time="2025-12-16T02:10:40.378436035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:40.378980 kubelet[3435]: E1216 02:10:40.378903 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:40.378980 kubelet[3435]: E1216 02:10:40.378973 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:40.379470 kubelet[3435]: E1216 02:10:40.379347 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:40.379470 kubelet[3435]: E1216 02:10:40.380760 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:41.829458 containerd[1959]: time="2025-12-16T02:10:41.829385958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:42.094579 containerd[1959]: time="2025-12-16T02:10:42.094394367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:42.098605 containerd[1959]: time="2025-12-16T02:10:42.098294163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:42.099715 containerd[1959]: time="2025-12-16T02:10:42.098396511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:42.099908 kubelet[3435]: E1216 02:10:42.099785 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:42.099908 kubelet[3435]: E1216 02:10:42.099858 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:42.100571 kubelet[3435]: E1216 02:10:42.100171 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85gh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-szgl8_calico-apiserver(9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:42.102850 kubelet[3435]: E1216 02:10:42.102198 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:42.103033 containerd[1959]: time="2025-12-16T02:10:42.102584115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:42.394241 containerd[1959]: time="2025-12-16T02:10:42.393423905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:42.395942 containerd[1959]: time="2025-12-16T02:10:42.395707649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:42.395942 containerd[1959]: time="2025-12-16T02:10:42.395845745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:42.396256 kubelet[3435]: E1216 02:10:42.396119 3435 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:42.396256 kubelet[3435]: E1216 02:10:42.396190 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:42.396504 kubelet[3435]: E1216 02:10:42.396398 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:42.400524 containerd[1959]: time="2025-12-16T02:10:42.400367453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:10:42.680827 containerd[1959]: time="2025-12-16T02:10:42.680343714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:42.682868 containerd[1959]: time="2025-12-16T02:10:42.682729110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:10:42.684392 containerd[1959]: time="2025-12-16T02:10:42.682786974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:42.685022 kubelet[3435]: E1216 02:10:42.684893 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:42.685414 kubelet[3435]: E1216 02:10:42.685290 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:42.685785 kubelet[3435]: E1216 02:10:42.685659 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:42.687347 kubelet[3435]: E1216 02:10:42.687258 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:10:42.832481 containerd[1959]: time="2025-12-16T02:10:42.830955787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:43.104149 containerd[1959]: time="2025-12-16T02:10:43.104078512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:43.107085 containerd[1959]: time="2025-12-16T02:10:43.106693840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:43.107445 containerd[1959]: time="2025-12-16T02:10:43.107289724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:43.108030 kubelet[3435]: E1216 02:10:43.107954 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:43.109229 kubelet[3435]: E1216 02:10:43.109164 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:43.109915 kubelet[3435]: E1216 02:10:43.109543 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8nl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-7gb6n_calico-apiserver(495e068f-5f86-4d9c-b537-813d53666a90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:43.111209 kubelet[3435]: E1216 02:10:43.111126 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:43.811575 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:43.811725 kernel: audit: type=1130 audit(1765851043.800:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.223:22-139.178.89.65:48814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:43.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.223:22-139.178.89.65:48814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:43.801603 systemd[1]: Started sshd@17-172.31.29.223:22-139.178.89.65:48814.service - OpenSSH per-connection server daemon (139.178.89.65:48814). 
Dec 16 02:10:44.036000 audit[5636]: USER_ACCT pid=5636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.044834 sshd[5636]: Accepted publickey for core from 139.178.89.65 port 48814 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:44.046175 kernel: audit: type=1101 audit(1765851044.036:815): pid=5636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.045000 audit[5636]: CRED_ACQ pid=5636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.047796 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:44.060088 kernel: audit: type=1103 audit(1765851044.045:816): pid=5636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.060219 kernel: audit: type=1006 audit(1765851044.045:817): pid=5636 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 02:10:44.069732 kernel: audit: type=1300 audit(1765851044.045:817): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff59ef7b0 a2=3 a3=0 items=0 ppid=1 pid=5636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.045000 audit[5636]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff59ef7b0 a2=3 a3=0 items=0 ppid=1 pid=5636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.045000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:44.075280 kernel: audit: type=1327 audit(1765851044.045:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:44.082910 systemd-logind[1940]: New session 19 of user core. Dec 16 02:10:44.089422 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 02:10:44.097000 audit[5636]: USER_START pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.105000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.112361 kernel: audit: type=1105 audit(1765851044.097:818): pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.112503 kernel: audit: type=1103 audit(1765851044.105:819): pid=5640 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.360456 sshd[5640]: Connection closed by 139.178.89.65 port 48814 Dec 16 02:10:44.361538 sshd-session[5636]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:44.366000 audit[5636]: USER_END pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.366000 audit[5636]: CRED_DISP pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.386232 kernel: audit: type=1106 audit(1765851044.366:820): pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.386391 kernel: audit: type=1104 audit(1765851044.366:821): pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.387801 systemd[1]: sshd@17-172.31.29.223:22-139.178.89.65:48814.service: Deactivated successfully. Dec 16 02:10:44.389182 systemd-logind[1940]: Session 19 logged out. Waiting for processes to exit. Dec 16 02:10:44.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.223:22-139.178.89.65:48814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:44.399944 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 16 02:10:44.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.223:22-139.178.89.65:48826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:44.432521 systemd[1]: Started sshd@18-172.31.29.223:22-139.178.89.65:48826.service - OpenSSH per-connection server daemon (139.178.89.65:48826). Dec 16 02:10:44.440874 systemd-logind[1940]: Removed session 19. Dec 16 02:10:44.707000 audit[5652]: USER_ACCT pid=5652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.709109 sshd[5652]: Accepted publickey for core from 139.178.89.65 port 48826 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:44.710000 audit[5652]: CRED_ACQ pid=5652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.710000 audit[5652]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5c8ec40 a2=3 a3=0 items=0 ppid=1 pid=5652 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:44.710000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:44.714517 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:44.729136 systemd-logind[1940]: New session 20 of user core. Dec 16 02:10:44.735476 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 02:10:44.744000 audit[5652]: USER_START pid=5652 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:44.748000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:45.241583 sshd[5656]: Connection closed by 139.178.89.65 port 48826 Dec 16 02:10:45.242794 sshd-session[5652]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:45.247000 audit[5652]: USER_END pid=5652 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:45.248000 audit[5652]: CRED_DISP pid=5652 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:45.257782 systemd-logind[1940]: Session 20 logged out. Waiting for processes to exit. 
Dec 16 02:10:45.258252 systemd[1]: sshd@18-172.31.29.223:22-139.178.89.65:48826.service: Deactivated successfully. Dec 16 02:10:45.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.223:22-139.178.89.65:48826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:45.265992 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 02:10:45.293317 systemd-logind[1940]: Removed session 20. Dec 16 02:10:45.296033 systemd[1]: Started sshd@19-172.31.29.223:22-139.178.89.65:48834.service - OpenSSH per-connection server daemon (139.178.89.65:48834). Dec 16 02:10:45.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.223:22-139.178.89.65:48834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:45.537000 audit[5669]: USER_ACCT pid=5669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:45.539037 sshd[5669]: Accepted publickey for core from 139.178.89.65 port 48834 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:45.539000 audit[5669]: CRED_ACQ pid=5669 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:45.539000 audit[5669]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdde39640 a2=3 a3=0 items=0 ppid=1 pid=5669 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:45.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:45.542505 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:45.552523 systemd-logind[1940]: New session 21 of user core. Dec 16 02:10:45.564269 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 02:10:45.573000 audit[5669]: USER_START pid=5669 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:45.577000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.129464 sshd[5678]: Connection closed by 139.178.89.65 port 48834 Dec 16 02:10:47.130498 sshd-session[5669]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:47.135000 audit[5709]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:47.137000 audit[5669]: USER_END pid=5669 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.138000 audit[5669]: CRED_DISP pid=5669 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.135000 audit[5709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc8159210 a2=0 a3=1 items=0 ppid=3591 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:47.147894 systemd[1]: sshd@19-172.31.29.223:22-139.178.89.65:48834.service: Deactivated successfully. Dec 16 02:10:47.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.223:22-139.178.89.65:48834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:47.149157 systemd-logind[1940]: Session 21 logged out. Waiting for processes to exit. Dec 16 02:10:47.155855 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 02:10:47.165000 audit[5709]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:47.165000 audit[5709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc8159210 a2=0 a3=1 items=0 ppid=3591 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:47.195346 systemd-logind[1940]: Removed session 21. 
Dec 16 02:10:47.198650 systemd[1]: Started sshd@20-172.31.29.223:22-139.178.89.65:48840.service - OpenSSH per-connection server daemon (139.178.89.65:48840). Dec 16 02:10:47.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.223:22-139.178.89.65:48840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:47.236000 audit[5716]: NETFILTER_CFG table=filter:141 family=2 entries=38 op=nft_register_rule pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:47.236000 audit[5716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff93a2500 a2=0 a3=1 items=0 ppid=3591 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:47.240000 audit[5716]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:47.240000 audit[5716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff93a2500 a2=0 a3=1 items=0 ppid=3591 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:47.429000 audit[5714]: USER_ACCT pid=5714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.430933 sshd[5714]: Accepted publickey for core from 139.178.89.65 port 48840 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:47.433000 audit[5714]: CRED_ACQ pid=5714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.434000 audit[5714]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdb07400 a2=3 a3=0 items=0 ppid=1 pid=5714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.434000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:47.437868 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:47.451475 systemd-logind[1940]: New session 22 of user core. Dec 16 02:10:47.460834 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 02:10:47.469000 audit[5714]: USER_START pid=5714 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.474000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.194461 sshd[5720]: Connection closed by 139.178.89.65 port 48840 Dec 16 02:10:48.198098 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:48.201000 audit[5714]: USER_END pid=5714 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.201000 audit[5714]: CRED_DISP pid=5714 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.208734 systemd[1]: sshd@20-172.31.29.223:22-139.178.89.65:48840.service: Deactivated successfully. Dec 16 02:10:48.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.223:22-139.178.89.65:48840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:48.217672 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 02:10:48.223528 systemd-logind[1940]: Session 22 logged out. Waiting for processes to exit. Dec 16 02:10:48.253637 systemd[1]: Started sshd@21-172.31.29.223:22-139.178.89.65:48854.service - OpenSSH per-connection server daemon (139.178.89.65:48854). Dec 16 02:10:48.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.223:22-139.178.89.65:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:48.261522 systemd-logind[1940]: Removed session 22. 
Dec 16 02:10:48.463000 audit[5730]: USER_ACCT pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.466712 sshd[5730]: Accepted publickey for core from 139.178.89.65 port 48854 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:48.467000 audit[5730]: CRED_ACQ pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.467000 audit[5730]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff695070 a2=3 a3=0 items=0 ppid=1 pid=5730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.467000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:48.471310 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:48.484386 systemd-logind[1940]: New session 23 of user core. Dec 16 02:10:48.492347 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 02:10:48.501000 audit[5730]: USER_START pid=5730 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.506000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.741482 sshd[5734]: Connection closed by 139.178.89.65 port 48854 Dec 16 02:10:48.745337 sshd-session[5730]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:48.749000 audit[5730]: USER_END pid=5730 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.749000 audit[5730]: CRED_DISP pid=5730 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.756293 systemd[1]: sshd@21-172.31.29.223:22-139.178.89.65:48854.service: Deactivated successfully. Dec 16 02:10:48.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.223:22-139.178.89.65:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:48.761900 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 02:10:48.772716 systemd-logind[1940]: Session 23 logged out. Waiting for processes to exit. 
Dec 16 02:10:48.775160 systemd-logind[1940]: Removed session 23. Dec 16 02:10:48.829928 containerd[1959]: time="2025-12-16T02:10:48.829870477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:49.108139 containerd[1959]: time="2025-12-16T02:10:49.107221210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:49.109996 containerd[1959]: time="2025-12-16T02:10:49.109724854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:49.110502 containerd[1959]: time="2025-12-16T02:10:49.109926850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:49.111964 kubelet[3435]: E1216 02:10:49.111901 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:49.113217 kubelet[3435]: E1216 02:10:49.112641 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:49.113217 kubelet[3435]: E1216 02:10:49.113095 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmhxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8478cfdcd8-h8dlm_calico-system(e4cfe125-283d-49d5-a19e-c93c8097201d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:49.114440 kubelet[3435]: E1216 02:10:49.114360 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:10:50.829833 kubelet[3435]: E1216 02:10:50.829726 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:10:52.835039 kubelet[3435]: E1216 02:10:52.834290 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:10:53.788918 systemd[1]: Started sshd@22-172.31.29.223:22-139.178.89.65:44784.service - OpenSSH per-connection server daemon (139.178.89.65:44784). 
Dec 16 02:10:53.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.223:22-139.178.89.65:44784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:53.792278 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 02:10:53.792389 kernel: audit: type=1130 audit(1765851053.788:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.223:22-139.178.89.65:44784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:54.041000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.048519 sshd[5747]: Accepted publickey for core from 139.178.89.65 port 44784 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:54.053206 kernel: audit: type=1101 audit(1765851054.041:864): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.053000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.062447 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:54.066259 kernel: audit: type=1103 audit(1765851054.053:865): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.066773 kernel: audit: type=1006 audit(1765851054.053:866): pid=5747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 02:10:54.053000 audit[5747]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee4bdbc0 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:54.074683 kernel: audit: type=1300 audit(1765851054.053:866): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee4bdbc0 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:54.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:54.077602 kernel: audit: type=1327 audit(1765851054.053:866): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:54.077709 kernel: audit: type=1325 audit(1765851054.056:867): table=filter:143 family=2 entries=26 op=nft_register_rule pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:54.056000 audit[5752]: NETFILTER_CFG 
table=filter:143 family=2 entries=26 op=nft_register_rule pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:54.056000 audit[5752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe6cfb420 a2=0 a3=1 items=0 ppid=3591 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:54.087749 kernel: audit: type=1300 audit(1765851054.056:867): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe6cfb420 a2=0 a3=1 items=0 ppid=3591 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:54.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:54.092159 kernel: audit: type=1327 audit(1765851054.056:867): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:54.092279 kernel: audit: type=1325 audit(1765851054.077:868): table=nat:144 family=2 entries=104 op=nft_register_chain pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:54.077000 audit[5752]: NETFILTER_CFG table=nat:144 family=2 entries=104 op=nft_register_chain pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:54.077000 audit[5752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe6cfb420 a2=0 a3=1 items=0 ppid=3591 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:54.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:54.099161 systemd-logind[1940]: New session 24 of user core. Dec 16 02:10:54.106498 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 02:10:54.113000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.118000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.382335 sshd[5753]: Connection closed by 139.178.89.65 port 44784 Dec 16 02:10:54.382085 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:54.386000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.386000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:54.396630 systemd[1]: sshd@22-172.31.29.223:22-139.178.89.65:44784.service: Deactivated successfully. Dec 16 02:10:54.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.223:22-139.178.89.65:44784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:54.407216 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 02:10:54.412989 systemd-logind[1940]: Session 24 logged out. Waiting for processes to exit. Dec 16 02:10:54.415791 systemd-logind[1940]: Removed session 24. 
Dec 16 02:10:54.829218 kubelet[3435]: E1216 02:10:54.829079 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:10:55.831232 kubelet[3435]: E1216 02:10:55.831134 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:10:56.827987 kubelet[3435]: E1216 02:10:56.827309 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:10:59.429780 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:10:59.429951 kernel: audit: type=1130 audit(1765851059.422:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.223:22-139.178.89.65:44788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:59.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.223:22-139.178.89.65:44788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:59.422947 systemd[1]: Started sshd@23-172.31.29.223:22-139.178.89.65:44788.service - OpenSSH per-connection server daemon (139.178.89.65:44788). 
Dec 16 02:10:59.647000 audit[5788]: USER_ACCT pid=5788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.657332 sshd[5788]: Accepted publickey for core from 139.178.89.65 port 44788 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:59.656000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.666561 kernel: audit: type=1101 audit(1765851059.647:875): pid=5788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.666671 kernel: audit: type=1103 audit(1765851059.656:876): pid=5788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.662811 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:59.671808 kernel: audit: type=1006 audit(1765851059.656:877): pid=5788 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 02:10:59.656000 audit[5788]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed026f20 a2=3 a3=0 items=0 ppid=1 pid=5788 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:59.680001 kernel: audit: type=1300 audit(1765851059.656:877): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed026f20 a2=3 a3=0 items=0 ppid=1 pid=5788 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:59.656000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:59.683192 kernel: audit: type=1327 audit(1765851059.656:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:59.693208 systemd-logind[1940]: New session 25 of user core. Dec 16 02:10:59.703897 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 02:10:59.712000 audit[5788]: USER_START pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.721000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.728169 kernel: audit: type=1105 audit(1765851059.712:878): pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.728378 kernel: audit: type=1103 audit(1765851059.721:879): pid=5792 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.972803 sshd[5792]: Connection closed by 139.178.89.65 port 44788 Dec 16 02:10:59.970711 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:59.974000 audit[5788]: USER_END pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.989473 systemd[1]: sshd@23-172.31.29.223:22-139.178.89.65:44788.service: Deactivated successfully. Dec 16 02:10:59.975000 audit[5788]: CRED_DISP pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.995921 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 02:10:59.996364 kernel: audit: type=1106 audit(1765851059.974:880): pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.996584 kernel: audit: type=1104 audit(1765851059.975:881): pid=5788 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:59.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.223:22-139.178.89.65:44788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:00.006965 systemd-logind[1940]: Session 25 logged out. Waiting for processes to exit. Dec 16 02:11:00.013104 systemd-logind[1940]: Removed session 25. 
Dec 16 02:11:00.831099 kubelet[3435]: E1216 02:11:00.829245 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:11:01.826612 kubelet[3435]: E1216 02:11:01.826099 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:11:03.831092 kubelet[3435]: E1216 02:11:03.830436 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:11:05.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.223:22-139.178.89.65:36806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:05.015671 systemd[1]: Started sshd@24-172.31.29.223:22-139.178.89.65:36806.service - OpenSSH per-connection server daemon (139.178.89.65:36806). Dec 16 02:11:05.018470 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:05.018609 kernel: audit: type=1130 audit(1765851065.014:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.223:22-139.178.89.65:36806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:11:05.245000 audit[5809]: USER_ACCT pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.247415 sshd[5809]: Accepted publickey for core from 139.178.89.65 port 36806 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:05.252000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.256891 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:05.262638 kernel: audit: type=1101 audit(1765851065.245:884): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.262752 kernel: audit: type=1103 audit(1765851065.252:885): pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.269031 kernel: audit: type=1006 audit(1765851065.253:886): pid=5809 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 02:11:05.253000 audit[5809]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec8e3710 a2=3 a3=0 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:05.279197 kernel: audit: type=1300 audit(1765851065.253:886): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec8e3710 a2=3 a3=0 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:05.279320 kernel: audit: type=1327 audit(1765851065.253:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:05.253000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:05.276473 systemd-logind[1940]: New session 26 of user core. Dec 16 02:11:05.288832 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 02:11:05.301000 audit[5809]: USER_START pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.313088 kernel: audit: type=1105 audit(1765851065.301:887): pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.313217 kernel: audit: type=1103 audit(1765851065.310:888): pid=5813 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.310000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.583122 sshd[5813]: Connection closed by 139.178.89.65 port 36806 Dec 16 02:11:05.584353 sshd-session[5809]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:05.586000 audit[5809]: USER_END pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.599812 systemd[1]: sshd@24-172.31.29.223:22-139.178.89.65:36806.service: Deactivated successfully. Dec 16 02:11:05.586000 audit[5809]: CRED_DISP pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.607854 kernel: audit: type=1106 audit(1765851065.586:889): pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.607982 kernel: audit: type=1104 audit(1765851065.586:890): pid=5809 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:05.609949 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 02:11:05.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.223:22-139.178.89.65:36806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:05.617576 systemd-logind[1940]: Session 26 logged out. Waiting for processes to exit. Dec 16 02:11:05.622371 systemd-logind[1940]: Removed session 26. 
Dec 16 02:11:07.826842 kubelet[3435]: E1216 02:11:07.826729 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:11:09.826797 kubelet[3435]: E1216 02:11:09.826694 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:11:10.629310 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:10.629561 kernel: audit: type=1130 audit(1765851070.621:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.223:22-139.178.89.65:45932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:10.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.223:22-139.178.89.65:45932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:10.622639 systemd[1]: Started sshd@25-172.31.29.223:22-139.178.89.65:45932.service - OpenSSH per-connection server daemon (139.178.89.65:45932). 
Dec 16 02:11:10.833952 kubelet[3435]: E1216 02:11:10.833862 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:11:10.840410 sshd[5828]: Accepted publickey for core from 139.178.89.65 port 45932 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:10.839000 audit[5828]: USER_ACCT pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.849000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.859304 kernel: audit: type=1101 audit(1765851070.839:893): pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.859453 kernel: audit: type=1103 audit(1765851070.849:894): pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.860378 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:10.865782 kernel: audit: type=1006 audit(1765851070.849:895): pid=5828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 02:11:10.849000 audit[5828]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8dcac70 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:10.876036 kernel: audit: type=1300 audit(1765851070.849:895): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8dcac70 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:10.885104 kernel: audit: type=1327 audit(1765851070.849:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:10.849000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:10.887996 systemd-logind[1940]: New session 27 of user core. Dec 16 02:11:10.897576 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 02:11:10.910000 audit[5828]: USER_START pid=5828 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.923324 kernel: audit: type=1105 audit(1765851070.910:896): pid=5828 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.923000 audit[5832]: CRED_ACQ pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:10.931124 kernel: audit: type=1103 audit(1765851070.923:897): pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:11.174383 sshd[5832]: Connection closed by 139.178.89.65 port 45932 Dec 16 02:11:11.175633 sshd-session[5828]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:11.179000 audit[5828]: USER_END pid=5828 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:11.190831 systemd[1]: sshd@25-172.31.29.223:22-139.178.89.65:45932.service: Deactivated successfully. Dec 16 02:11:11.199897 kernel: audit: type=1106 audit(1765851071.179:898): pid=5828 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:11.200138 kernel: audit: type=1104 audit(1765851071.184:899): pid=5828 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:11.184000 audit[5828]: CRED_DISP pid=5828 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:11.202795 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 02:11:11.208832 systemd-logind[1940]: Session 27 logged out. Waiting for processes to exit. 
Dec 16 02:11:11.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.223:22-139.178.89.65:45932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:11.215409 systemd-logind[1940]: Removed session 27. Dec 16 02:11:11.827475 kubelet[3435]: E1216 02:11:11.827389 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:11:13.825223 kubelet[3435]: E1216 02:11:13.825141 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:11:15.830076 kubelet[3435]: E1216 02:11:15.829903 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:11:16.223652 systemd[1]: Started sshd@26-172.31.29.223:22-139.178.89.65:45948.service - OpenSSH per-connection server daemon (139.178.89.65:45948). Dec 16 02:11:16.232243 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:16.232395 kernel: audit: type=1130 audit(1765851076.223:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.223:22-139.178.89.65:45948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:16.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.223:22-139.178.89.65:45948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:11:16.465000 audit[5845]: USER_ACCT pid=5845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.467319 sshd[5845]: Accepted publickey for core from 139.178.89.65 port 45948 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:16.473000 audit[5845]: CRED_ACQ pid=5845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.479342 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:16.481601 kernel: audit: type=1101 audit(1765851076.465:902): pid=5845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.481755 kernel: audit: type=1103 audit(1765851076.473:903): pid=5845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.481838 kernel: audit: type=1006 audit(1765851076.473:904): pid=5845 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 02:11:16.473000 audit[5845]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe551fa70 a2=3 a3=0 items=0 ppid=1 pid=5845 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:16.493091 kernel: audit: type=1300 audit(1765851076.473:904): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe551fa70 a2=3 a3=0 items=0 ppid=1 pid=5845 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:16.473000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:16.497210 kernel: audit: type=1327 audit(1765851076.473:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:16.506281 systemd-logind[1940]: New session 28 of user core. Dec 16 02:11:16.516832 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 16 02:11:16.530000 audit[5845]: USER_START pid=5845 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.543091 kernel: audit: type=1105 audit(1765851076.530:905): pid=5845 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.542000 audit[5849]: CRED_ACQ pid=5849 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.550144 kernel: audit: type=1103 audit(1765851076.542:906): pid=5849 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.801202 sshd[5849]: Connection closed by 139.178.89.65 port 45948 Dec 16 02:11:16.801978 sshd-session[5845]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:16.806000 audit[5845]: USER_END pid=5845 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.816862 systemd[1]: sshd@26-172.31.29.223:22-139.178.89.65:45948.service: Deactivated successfully. Dec 16 02:11:16.807000 audit[5845]: CRED_DISP pid=5845 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.830851 kernel: audit: type=1106 audit(1765851076.806:907): pid=5845 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.830992 kernel: audit: type=1104 audit(1765851076.807:908): pid=5845 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:16.829182 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 02:11:16.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.223:22-139.178.89.65:45948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:16.838030 systemd-logind[1940]: Session 28 logged out. Waiting for processes to exit. Dec 16 02:11:16.841475 systemd-logind[1940]: Removed session 28. 
Dec 16 02:11:18.829319 kubelet[3435]: E1216 02:11:18.826826 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:11:20.826156 kubelet[3435]: E1216 02:11:20.825942 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:11:21.851750 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:21.851910 kernel: audit: type=1130 audit(1765851081.844:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.223:22-139.178.89.65:57518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:21.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.223:22-139.178.89.65:57518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:21.844646 systemd[1]: Started sshd@27-172.31.29.223:22-139.178.89.65:57518.service - OpenSSH per-connection server daemon (139.178.89.65:57518). 
Dec 16 02:11:22.073000 audit[5863]: USER_ACCT pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.081276 sshd[5863]: Accepted publickey for core from 139.178.89.65 port 57518 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:22.080000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.090795 kernel: audit: type=1101 audit(1765851082.073:911): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.090959 kernel: audit: type=1103 audit(1765851082.080:912): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.086457 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:22.098234 kernel: audit: type=1006 audit(1765851082.080:913): pid=5863 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Dec 16 02:11:22.080000 audit[5863]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd47d8480 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:22.110533 kernel: audit: type=1300 audit(1765851082.080:913): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd47d8480 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:22.110775 kernel: audit: type=1327 audit(1765851082.080:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:22.080000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:22.122172 systemd-logind[1940]: New session 29 of user core. Dec 16 02:11:22.129393 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 16 02:11:22.143000 audit[5863]: USER_START pid=5863 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.157117 kernel: audit: type=1105 audit(1765851082.143:914): pid=5863 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.156000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.166099 kernel: audit: type=1103 audit(1765851082.156:915): pid=5867 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.416805 sshd[5867]: Connection closed by 139.178.89.65 port 57518 Dec 16 02:11:22.417955 sshd-session[5863]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:22.423000 audit[5863]: USER_END pid=5863 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.423000 audit[5863]: CRED_DISP pid=5863 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.439818 kernel: audit: type=1106 audit(1765851082.423:916): pid=5863 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.439976 kernel: audit: type=1104 audit(1765851082.423:917): pid=5863 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:22.441787 systemd-logind[1940]: Session 29 logged out. Waiting for processes to exit. Dec 16 02:11:22.442946 systemd[1]: sshd@27-172.31.29.223:22-139.178.89.65:57518.service: Deactivated successfully. Dec 16 02:11:22.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.223:22-139.178.89.65:57518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:22.450448 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 02:11:22.456568 systemd-logind[1940]: Removed session 29. 
Dec 16 02:11:22.830714 containerd[1959]: time="2025-12-16T02:11:22.830651218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:11:23.127453 containerd[1959]: time="2025-12-16T02:11:23.126898483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:23.129837 containerd[1959]: time="2025-12-16T02:11:23.129729067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:11:23.130939 containerd[1959]: time="2025-12-16T02:11:23.129949111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:23.131627 kubelet[3435]: E1216 02:11:23.131427 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:11:23.131627 kubelet[3435]: E1216 02:11:23.131500 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:11:23.133922 kubelet[3435]: E1216 02:11:23.131679 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:23.135755 containerd[1959]: time="2025-12-16T02:11:23.135360979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:11:23.433036 containerd[1959]: time="2025-12-16T02:11:23.432462405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:23.434840 containerd[1959]: time="2025-12-16T02:11:23.434740953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:11:23.435032 containerd[1959]: time="2025-12-16T02:11:23.434759637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:23.435741 kubelet[3435]: E1216 02:11:23.435670 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:23.436070 kubelet[3435]: E1216 02:11:23.435936 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:23.437326 kubelet[3435]: E1216 02:11:23.437195 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ghk7w_calico-system(f50b9bab-3859-4d1b-ba44-d918ecbff9d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:23.438639 kubelet[3435]: E1216 02:11:23.438532 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:11:25.826099 kubelet[3435]: E1216 02:11:25.825978 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:11:25.828378 containerd[1959]: time="2025-12-16T02:11:25.827273389Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:11:26.116130 containerd[1959]: time="2025-12-16T02:11:26.115915774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:26.118516 containerd[1959]: time="2025-12-16T02:11:26.118421242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:11:26.118885 containerd[1959]: time="2025-12-16T02:11:26.118568038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:26.119160 kubelet[3435]: E1216 02:11:26.119100 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:11:26.119344 kubelet[3435]: E1216 02:11:26.119311 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:11:26.119921 kubelet[3435]: E1216 02:11:26.119771 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmqs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7kpqw_calico-system(874b2e4b-6331-48a2-85aa-be2fc619cfc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:26.121227 kubelet[3435]: E1216 02:11:26.121138 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:11:29.825882 containerd[1959]: time="2025-12-16T02:11:29.825836176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:11:30.069388 containerd[1959]: time="2025-12-16T02:11:30.069319262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:30.071928 containerd[1959]: time="2025-12-16T02:11:30.071791370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:11:30.071928 containerd[1959]: time="2025-12-16T02:11:30.071870570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:30.072489 kubelet[3435]: E1216 02:11:30.072395 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:11:30.073149 kubelet[3435]: E1216 02:11:30.072500 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:11:30.073149 kubelet[3435]: E1216 02:11:30.072675 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:117178efe1694065bf2a2114c9173c9e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:30.076011 containerd[1959]: time="2025-12-16T02:11:30.075559538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:11:30.375613 containerd[1959]: time="2025-12-16T02:11:30.375433035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:30.377687 containerd[1959]: time="2025-12-16T02:11:30.377584095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:11:30.377897 containerd[1959]: time="2025-12-16T02:11:30.377598099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:30.378222 kubelet[3435]: E1216 02:11:30.378168 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:11:30.378842 kubelet[3435]: E1216 02:11:30.378459 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:11:30.378842 kubelet[3435]: E1216 02:11:30.378658 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tb62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d8765688-2r4mt_calico-system(e33f6f2f-50da-4153-83d7-2ef6a7213c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:30.379982 kubelet[3435]: E1216 02:11:30.379905 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:11:30.826543 containerd[1959]: time="2025-12-16T02:11:30.826484237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:31.089902 containerd[1959]: time="2025-12-16T02:11:31.089408727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:31.092936 containerd[1959]: time="2025-12-16T02:11:31.092735799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 02:11:31.092936 containerd[1959]: time="2025-12-16T02:11:31.092856819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:31.093258 kubelet[3435]: E1216 02:11:31.093155 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:31.093822 kubelet[3435]: E1216 02:11:31.093258 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:31.093822 kubelet[3435]: E1216 02:11:31.093597 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85gh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-szgl8_calico-apiserver(9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:31.094926 kubelet[3435]: E1216 02:11:31.094865 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:11:34.826599 containerd[1959]: time="2025-12-16T02:11:34.826459845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:35.108962 containerd[1959]: time="2025-12-16T02:11:35.108754171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:35.111036 containerd[1959]: time="2025-12-16T02:11:35.110967259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:35.111189 containerd[1959]: time="2025-12-16T02:11:35.111121039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:35.111456 kubelet[3435]: E1216 02:11:35.111382 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:35.112309 kubelet[3435]: E1216 02:11:35.111449 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:35.112309 kubelet[3435]: E1216 02:11:35.111660 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8nl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bbfbc6879-7gb6n_calico-apiserver(495e068f-5f86-4d9c-b537-813d53666a90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:35.113023 kubelet[3435]: E1216 02:11:35.112947 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:11:36.553289 systemd[1]: cri-containerd-e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e.scope: Deactivated successfully. Dec 16 02:11:36.553964 systemd[1]: cri-containerd-e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e.scope: Consumed 6.350s CPU time, 65M memory peak. Dec 16 02:11:36.557000 audit: BPF prog-id=259 op=LOAD Dec 16 02:11:36.560030 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:36.560155 kernel: audit: type=1334 audit(1765851096.557:919): prog-id=259 op=LOAD Dec 16 02:11:36.565141 kernel: audit: type=1334 audit(1765851096.560:920): prog-id=86 op=UNLOAD Dec 16 02:11:36.560000 audit: BPF prog-id=86 op=UNLOAD Dec 16 02:11:36.565382 containerd[1959]: time="2025-12-16T02:11:36.565252414Z" level=info msg="received container exit event container_id:\"e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e\" id:\"e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e\" pid:3237 exit_status:1 exited_at:{seconds:1765851096 nanos:562506730}" Dec 16 02:11:36.562000 audit: BPF prog-id=101 op=UNLOAD Dec 16 02:11:36.567390 kernel: audit: type=1334 audit(1765851096.562:921): prog-id=101 op=UNLOAD Dec 16 02:11:36.562000 audit: BPF prog-id=105 op=UNLOAD Dec 16 02:11:36.570522 kernel: audit: type=1334 audit(1765851096.562:922): prog-id=105 op=UNLOAD Dec 16 02:11:36.614944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e-rootfs.mount: Deactivated successfully. 
Dec 16 02:11:36.818378 kubelet[3435]: I1216 02:11:36.817375 3435 scope.go:117] "RemoveContainer" containerID="e41bcdd08568003f5ad0e6a7abd80b8baaeefde1bdaebe2d29cd48abb1d72a0e" Dec 16 02:11:36.827955 kubelet[3435]: E1216 02:11:36.827677 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:11:36.828186 kubelet[3435]: E1216 02:11:36.828016 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:11:36.850744 containerd[1959]: time="2025-12-16T02:11:36.850688627Z" level=info msg="CreateContainer within sandbox \"4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 02:11:36.882092 containerd[1959]: time="2025-12-16T02:11:36.879813875Z" level=info msg="Container d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:11:36.896638 systemd[1]: cri-containerd-0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40.scope: Deactivated successfully. Dec 16 02:11:36.898006 systemd[1]: cri-containerd-0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40.scope: Consumed 28.331s CPU time, 102.5M memory peak. 
Dec 16 02:11:36.899000 audit: BPF prog-id=149 op=UNLOAD Dec 16 02:11:36.899000 audit: BPF prog-id=153 op=UNLOAD Dec 16 02:11:36.904548 kernel: audit: type=1334 audit(1765851096.899:923): prog-id=149 op=UNLOAD Dec 16 02:11:36.904777 kernel: audit: type=1334 audit(1765851096.899:924): prog-id=153 op=UNLOAD Dec 16 02:11:36.905669 containerd[1959]: time="2025-12-16T02:11:36.905433024Z" level=info msg="received container exit event container_id:\"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\" id:\"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\" pid:3762 exit_status:1 exited_at:{seconds:1765851096 nanos:904019304}" Dec 16 02:11:36.915971 containerd[1959]: time="2025-12-16T02:11:36.915861672Z" level=info msg="CreateContainer within sandbox \"4e7db25c43351e3d77437ddd208027ef9016b5f6e9fd623a0e0bb907e4836f83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e\"" Dec 16 02:11:36.919120 containerd[1959]: time="2025-12-16T02:11:36.918988344Z" level=info msg="StartContainer for \"d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e\"" Dec 16 02:11:36.921966 containerd[1959]: time="2025-12-16T02:11:36.921769932Z" level=info msg="connecting to shim d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e" address="unix:///run/containerd/s/00709b6ed7c947ba0b40ee5d1a4ca3668c34c7aef478121ca003f9de94a4ab95" protocol=ttrpc version=3 Dec 16 02:11:36.980445 systemd[1]: Started cri-containerd-d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e.scope - libcontainer container d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e. Dec 16 02:11:37.026000 audit: BPF prog-id=260 op=LOAD Dec 16 02:11:37.030128 kernel: audit: type=1334 audit(1765851097.026:925): prog-id=260 op=LOAD Dec 16 02:11:37.028000 audit: BPF prog-id=261 op=LOAD Dec 16 02:11:37.028000 audit[5944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.039647 kernel: audit: type=1334 audit(1765851097.028:926): prog-id=261 op=LOAD Dec 16 02:11:37.039852 kernel: audit: type=1300 audit(1765851097.028:926): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.049786 kernel: audit: type=1327 audit(1765851097.028:926): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.031000 audit: BPF prog-id=261 op=UNLOAD Dec 16 02:11:37.031000 audit[5944]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.031000 audit: BPF prog-id=262 op=LOAD Dec 16 02:11:37.031000 audit[5944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.031000 audit: BPF prog-id=263 op=LOAD Dec 16 02:11:37.031000 audit[5944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.032000 audit: BPF prog-id=263 op=UNLOAD Dec 16 02:11:37.032000 audit[5944]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.032000 audit: BPF prog-id=262 op=UNLOAD Dec 16 02:11:37.032000 audit[5944]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.032000 audit: BPF prog-id=264 op=LOAD Dec 16 02:11:37.032000 audit[5944]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2996 pid=5944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 02:11:37.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435646239656538663833613030363538646261326233353138623862 Dec 16 02:11:37.108145 containerd[1959]: time="2025-12-16T02:11:37.107779917Z" level=info msg="StartContainer for \"d5db9ee8f83a00658dba2b3518b8b2ce68e864aa2d837f0dc9b9feec54f23d2e\" returns successfully" Dec 16 02:11:37.615490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40-rootfs.mount: Deactivated successfully. Dec 16 02:11:37.833748 kubelet[3435]: I1216 02:11:37.833394 3435 scope.go:117] "RemoveContainer" containerID="0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40" Dec 16 02:11:37.838614 containerd[1959]: time="2025-12-16T02:11:37.838541388Z" level=info msg="CreateContainer within sandbox \"b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 02:11:37.860014 containerd[1959]: time="2025-12-16T02:11:37.859948608Z" level=info msg="Container d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:11:37.890543 containerd[1959]: time="2025-12-16T02:11:37.890216700Z" level=info msg="CreateContainer within sandbox \"b63291b75d01cead7f9c3e97a5b649da7dc38036757cde1171aa8320a87b4245\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1\"" Dec 16 02:11:37.891134 containerd[1959]: time="2025-12-16T02:11:37.891074412Z" level=info msg="StartContainer for \"d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1\"" Dec 16 02:11:37.893304 containerd[1959]: time="2025-12-16T02:11:37.893222052Z" level=info msg="connecting to shim d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1" address="unix:///run/containerd/s/489a8202ae0919288f30ab3e5b8c1ea308dacaf680847191dd9871ad8be8f75a" protocol=ttrpc version=3 Dec 16 02:11:37.943425 systemd[1]: Started cri-containerd-d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1.scope - libcontainer container d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1. 
Dec 16 02:11:37.977000 audit: BPF prog-id=265 op=LOAD Dec 16 02:11:37.979000 audit: BPF prog-id=266 op=LOAD Dec 16 02:11:37.979000 audit[5979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:37.981000 audit: BPF prog-id=266 op=UNLOAD Dec 16 02:11:37.981000 audit[5979]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:37.981000 audit: BPF prog-id=267 op=LOAD Dec 16 02:11:37.981000 audit[5979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:37.982000 audit: BPF prog-id=268 op=LOAD Dec 16 02:11:37.982000 audit[5979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:37.982000 audit: BPF prog-id=268 op=UNLOAD Dec 16 02:11:37.982000 audit[5979]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:37.982000 audit: BPF prog-id=267 op=UNLOAD Dec 16 02:11:37.982000 audit[5979]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:37.982000 audit: BPF prog-id=269 op=LOAD Dec 16 02:11:37.982000 audit[5979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3490 pid=5979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:37.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430623136353538386135336165336236313731323130656532653837 Dec 16 02:11:38.038297 containerd[1959]: time="2025-12-16T02:11:38.038239881Z" level=info msg="StartContainer for \"d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1\" returns successfully" Dec 16 02:11:39.827122 containerd[1959]: time="2025-12-16T02:11:39.826409162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:11:40.080562 containerd[1959]: time="2025-12-16T02:11:40.080219507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:40.082840 containerd[1959]: time="2025-12-16T02:11:40.082603175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:11:40.082840 containerd[1959]: time="2025-12-16T02:11:40.082757219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:40.083517 kubelet[3435]: E1216 02:11:40.083306 3435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:11:40.084390 kubelet[3435]: E1216 02:11:40.083719 3435 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:11:40.085085 kubelet[3435]: E1216 02:11:40.084855 3435 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmhxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8478cfdcd8-h8dlm_calico-system(e4cfe125-283d-49d5-a19e-c93c8097201d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:40.086289 kubelet[3435]: E1216 02:11:40.086193 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:11:42.424037 systemd[1]: cri-containerd-5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45.scope: Deactivated successfully. 
Dec 16 02:11:42.425874 systemd[1]: cri-containerd-5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45.scope: Consumed 6.356s CPU time, 22.3M memory peak. Dec 16 02:11:42.432308 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 16 02:11:42.432477 kernel: audit: type=1334 audit(1765851102.427:941): prog-id=106 op=UNLOAD Dec 16 02:11:42.427000 audit: BPF prog-id=106 op=UNLOAD Dec 16 02:11:42.432610 containerd[1959]: time="2025-12-16T02:11:42.430911279Z" level=info msg="received container exit event container_id:\"5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45\" id:\"5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45\" pid:3271 exit_status:1 exited_at:{seconds:1765851102 nanos:430234431}" Dec 16 02:11:42.427000 audit: BPF prog-id=110 op=UNLOAD Dec 16 02:11:42.434322 kernel: audit: type=1334 audit(1765851102.427:942): prog-id=110 op=UNLOAD Dec 16 02:11:42.430000 audit: BPF prog-id=270 op=LOAD Dec 16 02:11:42.438153 kernel: audit: type=1334 audit(1765851102.430:943): prog-id=270 op=LOAD Dec 16 02:11:42.430000 audit: BPF prog-id=91 op=UNLOAD Dec 16 02:11:42.440466 kernel: audit: type=1334 audit(1765851102.430:944): prog-id=91 op=UNLOAD Dec 16 02:11:42.485239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45-rootfs.mount: Deactivated successfully. Dec 16 02:11:42.825976 kubelet[3435]: E1216 02:11:42.825777 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69" Dec 16 02:11:42.882369 kubelet[3435]: I1216 02:11:42.882314 3435 scope.go:117] "RemoveContainer" containerID="5ae4ba12859df0eed902adb2a3647f51dc0f53313205d88514a98df7b79cce45" Dec 16 02:11:42.888155 containerd[1959]: time="2025-12-16T02:11:42.887342237Z" level=info msg="CreateContainer within sandbox \"fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 02:11:42.910475 containerd[1959]: time="2025-12-16T02:11:42.909320153Z" level=info msg="Container 5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:11:42.930041 containerd[1959]: time="2025-12-16T02:11:42.929963369Z" level=info msg="CreateContainer within sandbox \"fc2847b871daa6423cec58b1912ce924aec31573b83f454ea633c4b0cf4622fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc\"" Dec 16 02:11:42.931369 containerd[1959]: time="2025-12-16T02:11:42.931302269Z" level=info msg="StartContainer for \"5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc\"" Dec 16 02:11:42.934754 containerd[1959]: time="2025-12-16T02:11:42.934680114Z" level=info msg="connecting to shim 5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc" address="unix:///run/containerd/s/53b216779e8bcf3e12b0282b2c74bfeb8504d107402118a1ead1614607c46f89" protocol=ttrpc version=3 Dec 16 02:11:42.972447 systemd[1]: Started 
cri-containerd-5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc.scope - libcontainer container 5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc. Dec 16 02:11:43.012000 audit: BPF prog-id=271 op=LOAD Dec 16 02:11:43.016091 kernel: audit: type=1334 audit(1765851103.012:945): prog-id=271 op=LOAD Dec 16 02:11:43.014000 audit: BPF prog-id=272 op=LOAD Dec 16 02:11:43.014000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.025989 kernel: audit: type=1334 audit(1765851103.014:946): prog-id=272 op=LOAD Dec 16 02:11:43.026573 kernel: audit: type=1300 audit(1765851103.014:946): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.026624 kernel: audit: type=1327 audit(1765851103.014:946): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.034835 kernel: audit: type=1334 audit(1765851103.017:947): prog-id=272 op=UNLOAD Dec 16 02:11:43.035013 kernel: audit: type=1300 audit(1765851103.017:947): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.017000 audit: BPF prog-id=272 op=UNLOAD Dec 16 02:11:43.017000 audit[6035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.018000 audit: BPF prog-id=273 op=LOAD Dec 16 02:11:43.018000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.018000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.018000 audit: BPF prog-id=274 op=LOAD Dec 16 02:11:43.018000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.018000 audit: BPF prog-id=274 op=UNLOAD Dec 16 02:11:43.018000 audit[6035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.018000 audit: BPF prog-id=273 op=UNLOAD Dec 16 02:11:43.018000 audit[6035]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.018000 audit: BPF prog-id=275 op=LOAD Dec 16 02:11:43.018000 audit[6035]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=3033 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:43.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643935633166643363356635633338363630393365626464343165 Dec 16 02:11:43.105474 containerd[1959]: time="2025-12-16T02:11:43.105297326Z" level=info msg="StartContainer for \"5ad95c1fd3c5f5c3866093ebdd41ec000e82d0b4017e63459aad838e8cdd8edc\" returns successfully" Dec 16 02:11:44.830239 kubelet[3435]: E1216 02:11:44.830162 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78d8765688-2r4mt" podUID="e33f6f2f-50da-4153-83d7-2ef6a7213c28" Dec 16 02:11:45.627287 kubelet[3435]: E1216 02:11:45.627208 3435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-223?timeout=10s\": context deadline exceeded" Dec 16 02:11:46.825975 kubelet[3435]: E1216 02:11:46.825902 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-7gb6n" podUID="495e068f-5f86-4d9c-b537-813d53666a90" Dec 16 02:11:49.564682 systemd[1]: cri-containerd-d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1.scope: Deactivated successfully. Dec 16 02:11:49.569179 containerd[1959]: time="2025-12-16T02:11:49.567901078Z" level=info msg="received container exit event container_id:\"d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1\" id:\"d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1\" pid:5991 exit_status:1 exited_at:{seconds:1765851109 nanos:566600182}" Dec 16 02:11:49.569000 audit: BPF prog-id=265 op=UNLOAD Dec 16 02:11:49.571694 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 02:11:49.571796 kernel: audit: type=1334 audit(1765851109.569:953): prog-id=265 op=UNLOAD Dec 16 02:11:49.569000 audit: BPF prog-id=269 op=UNLOAD Dec 16 02:11:49.575440 kernel: audit: type=1334 audit(1765851109.569:954): prog-id=269 op=UNLOAD Dec 16 02:11:49.616641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1-rootfs.mount: Deactivated successfully. 
Dec 16 02:11:49.826618 kubelet[3435]: E1216 02:11:49.826454 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ghk7w" podUID="f50b9bab-3859-4d1b-ba44-d918ecbff9d1" Dec 16 02:11:49.913569 kubelet[3435]: I1216 02:11:49.913473 3435 scope.go:117] "RemoveContainer" containerID="0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40" Dec 16 02:11:49.914727 kubelet[3435]: I1216 02:11:49.914617 3435 scope.go:117] "RemoveContainer" containerID="d0b165588a53ae3b6171210ee2e871f1953dde006024fd1c48091c358d1d78d1" Dec 16 02:11:49.915098 kubelet[3435]: E1216 02:11:49.915030 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-6p2fj_tigera-operator(cb11f455-b284-40c2-990e-3e7bc2af26e2)\"" pod="tigera-operator/tigera-operator-7dcd859c48-6p2fj" podUID="cb11f455-b284-40c2-990e-3e7bc2af26e2" Dec 16 02:11:49.918041 containerd[1959]: time="2025-12-16T02:11:49.917993664Z" level=info msg="RemoveContainer for \"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\"" Dec 16 02:11:49.927608 containerd[1959]: time="2025-12-16T02:11:49.927408276Z" level=info msg="RemoveContainer for \"0fbf977894cb77aa6ed906034392b3e42e4ef070d2da98f83bdbfdc679c48f40\" returns successfully" Dec 16 02:11:50.825748 kubelet[3435]: E1216 02:11:50.825456 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7kpqw" podUID="874b2e4b-6331-48a2-85aa-be2fc619cfc8" Dec 16 02:11:51.825789 kubelet[3435]: E1216 02:11:51.825711 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8478cfdcd8-h8dlm" podUID="e4cfe125-283d-49d5-a19e-c93c8097201d" Dec 16 02:11:55.629024 kubelet[3435]: E1216 02:11:55.628350 3435 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.29.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 02:11:56.825940 kubelet[3435]: E1216 02:11:56.825832 3435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bbfbc6879-szgl8" podUID="9240aeb5-ce88-4dd3-8f08-0a5e1f3a4a69"