Dec 16 02:07:53.982669 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 16 02:07:53.982720 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 02:07:53.982747 kernel: KASLR disabled due to lack of seed Dec 16 02:07:53.982764 kernel: efi: EFI v2.7 by EDK II Dec 16 02:07:53.982781 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Dec 16 02:07:53.982798 kernel: secureboot: Secure boot disabled Dec 16 02:07:53.982817 kernel: ACPI: Early table checksum verification disabled Dec 16 02:07:53.982832 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 16 02:07:53.982849 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 16 02:07:53.982871 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 16 02:07:53.982888 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 16 02:07:53.982904 kernel: ACPI: FACS 0x0000000078630000 000040 Dec 16 02:07:53.982919 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 16 02:07:53.982935 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 16 02:07:53.982959 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 16 02:07:53.982977 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 16 02:07:53.982994 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 16 02:07:53.983011 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 16 02:07:53.983028 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 16 02:07:53.983079 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 16 02:07:53.983103 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 16 02:07:53.983121 kernel: printk: legacy bootconsole [uart0] enabled Dec 16 02:07:53.983138 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 02:07:53.983156 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 16 02:07:53.983180 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Dec 16 02:07:53.983197 kernel: Zone ranges: Dec 16 02:07:53.983214 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 16 02:07:53.983231 kernel: DMA32 empty Dec 16 02:07:53.983248 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 16 02:07:53.983265 kernel: Device empty Dec 16 02:07:53.983282 kernel: Movable zone start for each node Dec 16 02:07:53.983299 kernel: Early memory node ranges Dec 16 02:07:53.983316 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 16 02:07:53.983333 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 16 02:07:53.983349 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 16 02:07:53.983365 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 16 02:07:53.983388 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 16 02:07:53.983404 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 16 02:07:53.983421 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 16 02:07:53.983438 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 16 02:07:53.983462 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 16 02:07:53.983485 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 16 02:07:53.983503 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Dec 16 02:07:53.983521 kernel: psci: probing for conduit method from ACPI. Dec 16 02:07:53.983539 kernel: psci: PSCIv1.0 detected in firmware. Dec 16 02:07:53.983556 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 02:07:53.983574 kernel: psci: Trusted OS migration not required Dec 16 02:07:53.983591 kernel: psci: SMC Calling Convention v1.1 Dec 16 02:07:53.983610 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Dec 16 02:07:53.983627 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 02:07:53.983650 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 02:07:53.983668 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 02:07:53.983685 kernel: Detected PIPT I-cache on CPU0 Dec 16 02:07:53.983703 kernel: CPU features: detected: GIC system register CPU interface Dec 16 02:07:53.983721 kernel: CPU features: detected: Spectre-v2 Dec 16 02:07:53.983738 kernel: CPU features: detected: Spectre-v3a Dec 16 02:07:53.983756 kernel: CPU features: detected: Spectre-BHB Dec 16 02:07:53.983773 kernel: CPU features: detected: ARM erratum 1742098 Dec 16 02:07:53.983791 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 16 02:07:53.983809 kernel: alternatives: applying boot alternatives Dec 16 02:07:53.983828 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:07:53.983852 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 02:07:53.983870 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 02:07:53.983888 kernel: Fallback order for Node 0: 0 Dec 16 02:07:53.983905 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Dec 16 02:07:53.983923 kernel: Policy zone: Normal Dec 16 02:07:53.983941 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 02:07:53.983958 kernel: software IO TLB: area num 2. Dec 16 02:07:53.983976 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Dec 16 02:07:53.983994 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 02:07:53.984011 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 02:07:53.984036 kernel: rcu: RCU event tracing is enabled. Dec 16 02:07:53.984088 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 02:07:53.984109 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 02:07:53.984127 kernel: Tracing variant of Tasks RCU enabled. Dec 16 02:07:53.984145 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 02:07:53.984162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 02:07:53.984180 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 02:07:53.984198 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 02:07:53.984216 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 02:07:53.984234 kernel: GICv3: 96 SPIs implemented Dec 16 02:07:53.984252 kernel: GICv3: 0 Extended SPIs implemented Dec 16 02:07:53.984277 kernel: Root IRQ handler: gic_handle_irq Dec 16 02:07:53.984294 kernel: GICv3: GICv3 features: 16 PPIs Dec 16 02:07:53.984312 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 02:07:53.984329 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 16 02:07:53.984347 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 16 02:07:53.987101 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Dec 16 02:07:53.987132 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Dec 16 02:07:53.987151 kernel: GICv3: using LPI property table @0x0000000400110000 Dec 16 02:07:53.987168 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 16 02:07:53.987186 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Dec 16 02:07:53.987204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 02:07:53.987233 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 16 02:07:53.987251 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 16 02:07:53.987269 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 16 02:07:53.987287 kernel: Console: colour dummy device 80x25 Dec 16 02:07:53.987306 kernel: printk: legacy console [tty1] enabled Dec 16 02:07:53.987324 kernel: ACPI: Core revision 20240827 Dec 16 02:07:53.987343 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 16 02:07:53.987362 kernel: pid_max: default: 32768 minimum: 301 Dec 16 02:07:53.987385 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 02:07:53.987403 kernel: landlock: Up and running. Dec 16 02:07:53.987422 kernel: SELinux: Initializing. Dec 16 02:07:53.987441 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:07:53.987459 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:07:53.987477 kernel: rcu: Hierarchical SRCU implementation. Dec 16 02:07:53.987497 kernel: rcu: Max phase no-delay instances is 400. Dec 16 02:07:53.987515 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 02:07:53.987538 kernel: Remapping and enabling EFI services. Dec 16 02:07:53.987556 kernel: smp: Bringing up secondary CPUs ... Dec 16 02:07:53.987574 kernel: Detected PIPT I-cache on CPU1 Dec 16 02:07:53.987593 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 16 02:07:53.987611 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Dec 16 02:07:53.987630 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 16 02:07:53.987648 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 02:07:53.987671 kernel: SMP: Total of 2 processors activated. 
Dec 16 02:07:53.987690 kernel: CPU: All CPU(s) started at EL1 Dec 16 02:07:53.987720 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 02:07:53.987743 kernel: CPU features: detected: 32-bit EL1 Support Dec 16 02:07:53.987763 kernel: CPU features: detected: CRC32 instructions Dec 16 02:07:53.987782 kernel: alternatives: applying system-wide alternatives Dec 16 02:07:53.987803 kernel: Memory: 3823400K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 185716K reserved, 16384K cma-reserved) Dec 16 02:07:53.987823 kernel: devtmpfs: initialized Dec 16 02:07:53.987848 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 02:07:53.987867 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 02:07:53.987887 kernel: 23648 pages in range for non-PLT usage Dec 16 02:07:53.987906 kernel: 515168 pages in range for PLT usage Dec 16 02:07:53.987925 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 02:07:53.987949 kernel: SMBIOS 3.0.0 present. Dec 16 02:07:53.987968 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 16 02:07:53.987988 kernel: DMI: Memory slots populated: 0/0 Dec 16 02:07:53.988007 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 02:07:53.988027 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 02:07:53.990105 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 02:07:53.990153 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 02:07:53.990184 kernel: audit: initializing netlink subsys (disabled) Dec 16 02:07:53.990204 kernel: audit: type=2000 audit(0.231:1): state=initialized audit_enabled=0 res=1 Dec 16 02:07:53.990224 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 02:07:53.990243 kernel: cpuidle: using governor menu Dec 16 02:07:53.990263 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 02:07:53.990283 kernel: ASID allocator initialised with 65536 entries Dec 16 02:07:53.990302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 02:07:53.990326 kernel: Serial: AMBA PL011 UART driver Dec 16 02:07:53.990346 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 02:07:53.990366 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 02:07:53.990386 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 02:07:53.990405 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 02:07:53.990424 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 02:07:53.990443 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 02:07:53.990468 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 02:07:53.990488 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 02:07:53.990508 kernel: ACPI: Added _OSI(Module Device) Dec 16 02:07:53.990528 kernel: ACPI: Added _OSI(Processor Device) Dec 16 02:07:53.990548 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 02:07:53.990569 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 02:07:53.990589 kernel: ACPI: Interpreter enabled Dec 16 02:07:53.990614 kernel: ACPI: Using GIC for interrupt routing Dec 16 02:07:53.990635 kernel: ACPI: MCFG table detected, 1 entries Dec 16 02:07:53.990656 kernel: ACPI: CPU0 has been hot-added Dec 16 02:07:53.990676 kernel: ACPI: CPU1 has been hot-added Dec 16 02:07:53.990695 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Dec 16 02:07:53.991141 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 02:07:53.991444 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 02:07:53.991751 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 02:07:53.992085 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Dec 16 02:07:53.992403 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Dec 16 02:07:53.992460 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 16 02:07:53.992487 kernel: acpiphp: Slot [1] registered Dec 16 02:07:53.992509 kernel: acpiphp: Slot [2] registered Dec 16 02:07:53.992541 kernel: acpiphp: Slot [3] registered Dec 16 02:07:53.992564 kernel: acpiphp: Slot [4] registered Dec 16 02:07:53.992584 kernel: acpiphp: Slot [5] registered Dec 16 02:07:53.992603 kernel: acpiphp: Slot [6] registered Dec 16 02:07:53.992622 kernel: acpiphp: Slot [7] registered Dec 16 02:07:53.992641 kernel: acpiphp: Slot [8] registered Dec 16 02:07:53.992660 kernel: acpiphp: Slot [9] registered Dec 16 02:07:53.992680 kernel: acpiphp: Slot [10] registered Dec 16 02:07:53.992706 kernel: acpiphp: Slot [11] registered Dec 16 02:07:53.992725 kernel: acpiphp: Slot [12] registered Dec 16 02:07:53.992744 kernel: acpiphp: Slot [13] registered Dec 16 02:07:53.992765 kernel: acpiphp: Slot [14] registered Dec 16 02:07:53.992786 kernel: acpiphp: Slot [15] registered Dec 16 02:07:53.992805 kernel: acpiphp: Slot [16] registered Dec 16 02:07:53.992826 kernel: acpiphp: Slot [17] registered Dec 16 02:07:53.992853 kernel: acpiphp: Slot [18] registered Dec 16 02:07:53.992874 kernel: acpiphp: Slot [19] registered Dec 16 02:07:53.992895 kernel: acpiphp: Slot [20] registered Dec 16 02:07:53.992915 kernel: acpiphp: Slot [21] registered Dec 16 02:07:53.992934 
kernel: acpiphp: Slot [22] registered Dec 16 02:07:53.992953 kernel: acpiphp: Slot [23] registered Dec 16 02:07:53.992973 kernel: acpiphp: Slot [24] registered Dec 16 02:07:53.992998 kernel: acpiphp: Slot [25] registered Dec 16 02:07:53.993018 kernel: acpiphp: Slot [26] registered Dec 16 02:07:53.993037 kernel: acpiphp: Slot [27] registered Dec 16 02:07:53.997169 kernel: acpiphp: Slot [28] registered Dec 16 02:07:53.997195 kernel: acpiphp: Slot [29] registered Dec 16 02:07:53.997218 kernel: acpiphp: Slot [30] registered Dec 16 02:07:53.997239 kernel: acpiphp: Slot [31] registered Dec 16 02:07:53.997258 kernel: PCI host bridge to bus 0000:00 Dec 16 02:07:53.997631 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 16 02:07:53.997888 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 02:07:53.998248 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 16 02:07:53.998521 kernel: pci_bus 0000:00: root bus resource [bus 00] Dec 16 02:07:53.998836 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Dec 16 02:07:54.000597 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Dec 16 02:07:54.000910 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Dec 16 02:07:54.001331 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Dec 16 02:07:54.001653 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Dec 16 02:07:54.001932 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 02:07:54.002317 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Dec 16 02:07:54.002600 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Dec 16 02:07:54.002879 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Dec 16 02:07:54.003220 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Dec 16 02:07:54.003526 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 16 02:07:54.003797 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 16 02:07:54.004145 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 02:07:54.004475 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 16 02:07:54.004514 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 02:07:54.004536 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 02:07:54.004556 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 02:07:54.004577 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 02:07:54.004597 kernel: iommu: Default domain type: Translated Dec 16 02:07:54.004627 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 02:07:54.004648 kernel: efivars: Registered efivars operations Dec 16 02:07:54.004668 kernel: vgaarb: loaded Dec 16 02:07:54.004688 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 02:07:54.004708 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 02:07:54.004729 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 02:07:54.004749 kernel: pnp: PnP ACPI init Dec 16 02:07:54.005161 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 16 02:07:54.005205 kernel: pnp: PnP ACPI: found 1 devices Dec 16 02:07:54.005226 kernel: NET: Registered PF_INET protocol family Dec 16 02:07:54.005247 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 02:07:54.005267 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 02:07:54.005288 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 02:07:54.005313 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 02:07:54.005343 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 02:07:54.005363 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 02:07:54.005384 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:07:54.005404 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:07:54.005424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 02:07:54.005444 kernel: PCI: CLS 0 bytes, default 64 Dec 16 02:07:54.005465 kernel: kvm [1]: HYP mode not available Dec 16 02:07:54.005490 kernel: Initialise system trusted keyrings Dec 16 02:07:54.005509 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 02:07:54.005529 kernel: Key type asymmetric registered Dec 16 02:07:54.005550 kernel: Asymmetric key parser 'x509' registered Dec 16 02:07:54.005570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 02:07:54.005589 kernel: io scheduler mq-deadline registered Dec 16 02:07:54.005609 kernel: io scheduler kyber registered Dec 16 02:07:54.005633 kernel: io scheduler bfq registered Dec 16 02:07:54.005962 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 16 02:07:54.006001 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 02:07:54.006022 kernel: ACPI: button: Power Button [PWRB] Dec 16 02:07:54.006068 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 16 02:07:54.006126 kernel: ACPI: button: Sleep Button [SLPB] Dec 16 02:07:54.006159 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 02:07:54.006180 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 16 02:07:54.006482 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 16 02:07:54.006512 kernel: printk: legacy console [ttyS0] disabled Dec 16 02:07:54.006532 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 16 02:07:54.006551 kernel: printk: legacy console [ttyS0] enabled Dec 16 02:07:54.006571 kernel: printk: legacy bootconsole [uart0] disabled Dec 16 02:07:54.006597 kernel: thunder_xcv, ver 1.0 Dec 16 02:07:54.006617 kernel: thunder_bgx, ver 1.0 Dec 16 02:07:54.006636 kernel: nicpf, ver 1.0 Dec 16 02:07:54.006655 kernel: nicvf, ver 1.0 Dec 16 02:07:54.006965 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 02:07:54.007280 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T02:07:50 UTC (1765850870) Dec 16 02:07:54.007311 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 02:07:54.007340 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Dec 16 02:07:54.007360 kernel: NET: Registered PF_INET6 protocol family Dec 16 02:07:54.007379 kernel: watchdog: NMI not fully supported Dec 16 02:07:54.007398 kernel: watchdog: Hard watchdog permanently disabled Dec 16 02:07:54.007417 kernel: Segment Routing with IPv6 Dec 16 02:07:54.007436 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 02:07:54.007455 kernel: NET: Registered PF_PACKET protocol family Dec 16 02:07:54.007479 kernel: Key type 
dns_resolver registered Dec 16 02:07:54.007497 kernel: registered taskstats version 1 Dec 16 02:07:54.007517 kernel: Loading compiled-in X.509 certificates Dec 16 02:07:54.007536 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 02:07:54.007556 kernel: Demotion targets for Node 0: null Dec 16 02:07:54.007575 kernel: Key type .fscrypt registered Dec 16 02:07:54.007594 kernel: Key type fscrypt-provisioning registered Dec 16 02:07:54.007617 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 02:07:54.007637 kernel: ima: Allocated hash algorithm: sha1 Dec 16 02:07:54.007657 kernel: ima: No architecture policies found Dec 16 02:07:54.007676 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 02:07:54.007695 kernel: clk: Disabling unused clocks Dec 16 02:07:54.007715 kernel: PM: genpd: Disabling unused power domains Dec 16 02:07:54.007734 kernel: Freeing unused kernel memory: 12480K Dec 16 02:07:54.007753 kernel: Run /init as init process Dec 16 02:07:54.007777 kernel: with arguments: Dec 16 02:07:54.007796 kernel: /init Dec 16 02:07:54.007814 kernel: with environment: Dec 16 02:07:54.007833 kernel: HOME=/ Dec 16 02:07:54.007852 kernel: TERM=linux Dec 16 02:07:54.007873 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 02:07:54.008199 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 02:07:54.008458 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 02:07:54.008491 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 02:07:54.008513 kernel: GPT:25804799 != 33554431 Dec 16 02:07:54.008533 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 02:07:54.008553 kernel: GPT:25804799 != 33554431 Dec 16 02:07:54.008572 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 02:07:54.008601 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 02:07:54.008621 kernel: SCSI subsystem initialized Dec 16 02:07:54.008643 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 02:07:54.008663 kernel: device-mapper: uevent: version 1.0.3 Dec 16 02:07:54.008684 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 02:07:54.008706 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 02:07:54.008726 kernel: raid6: neonx8 gen() 6512 MB/s Dec 16 02:07:54.008752 kernel: raid6: neonx4 gen() 6470 MB/s Dec 16 02:07:54.008771 kernel: raid6: neonx2 gen() 5359 MB/s Dec 16 02:07:54.008791 kernel: raid6: neonx1 gen() 3963 MB/s Dec 16 02:07:54.008812 kernel: raid6: int64x8 gen() 3626 MB/s Dec 16 02:07:54.008832 kernel: raid6: int64x4 gen() 3678 MB/s Dec 16 02:07:54.008851 kernel: raid6: int64x2 gen() 3577 MB/s Dec 16 02:07:54.008871 kernel: raid6: int64x1 gen() 2720 MB/s Dec 16 02:07:54.008898 kernel: raid6: using algorithm neonx8 gen() 6512 MB/s Dec 16 02:07:54.008919 kernel: raid6: .... 
xor() 4702 MB/s, rmw enabled Dec 16 02:07:54.008939 kernel: raid6: using neon recovery algorithm Dec 16 02:07:54.008959 kernel: xor: measuring software checksum speed Dec 16 02:07:54.008978 kernel: 8regs : 13025 MB/sec Dec 16 02:07:54.008998 kernel: 32regs : 12668 MB/sec Dec 16 02:07:54.009018 kernel: arm64_neon : 8707 MB/sec Dec 16 02:07:54.009076 kernel: xor: using function: 8regs (13025 MB/sec) Dec 16 02:07:54.009120 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 02:07:54.009145 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (221) Dec 16 02:07:54.009166 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 02:07:54.009186 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:54.009205 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 02:07:54.009225 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 02:07:54.009252 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 02:07:54.009271 kernel: loop: module loaded Dec 16 02:07:54.009291 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 02:07:54.009310 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 02:07:54.009332 systemd[1]: Successfully made /usr/ read-only. Dec 16 02:07:54.009358 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:07:54.009386 systemd[1]: Detected virtualization amazon. Dec 16 02:07:54.009408 systemd[1]: Detected architecture arm64. Dec 16 02:07:54.009428 systemd[1]: Running in initrd. Dec 16 02:07:54.009448 systemd[1]: No hostname configured, using default hostname. Dec 16 02:07:54.009470 systemd[1]: Hostname set to . Dec 16 02:07:54.009491 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:07:54.009512 systemd[1]: Queued start job for default target initrd.target. Dec 16 02:07:54.009537 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:07:54.009558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:07:54.009579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:07:54.009602 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 02:07:54.009624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:07:54.009668 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 02:07:54.009691 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 02:07:54.009712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:07:54.009734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:07:54.009756 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:07:54.009783 systemd[1]: Reached target paths.target - Path Units. 
Dec 16 02:07:54.009804 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:07:54.009825 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:07:54.009847 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:07:54.009868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:07:54.009890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:07:54.009912 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:07:54.009938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 02:07:54.009960 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 02:07:54.009982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:07:54.010003 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:07:54.010122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:07:54.010149 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:07:54.010173 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 02:07:54.010202 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 02:07:54.010224 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:07:54.010246 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 02:07:54.010268 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 02:07:54.010291 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 02:07:54.010313 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:07:54.010334 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:07:54.010362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:07:54.010384 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 02:07:54.010411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:07:54.010433 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 02:07:54.010455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:07:54.010477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:07:54.010558 systemd-journald[361]: Collecting audit messages is enabled. Dec 16 02:07:54.010612 kernel: audit: type=1130 audit(1765850873.977:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.010636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:07:54.010658 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 02:07:54.010717 systemd-journald[361]: Journal started Dec 16 02:07:54.010785 systemd-journald[361]: Runtime Journal (/run/log/journal/ec26b1b3b3dde4b9aac5448327693da5) is 8M, max 75.3M, 67.3M free. 
Dec 16 02:07:53.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.018091 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:07:54.018190 kernel: audit: type=1130 audit(1765850874.013:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.029523 kernel: Bridge firewalling registered Dec 16 02:07:54.030184 systemd-modules-load[362]: Inserted module 'br_netfilter' Dec 16 02:07:54.046295 kernel: audit: type=1130 audit(1765850874.034:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.033703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:07:54.035189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:07:54.056469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:07:54.067254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:07:54.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.078112 kernel: audit: type=1130 audit(1765850874.071:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.078817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:07:54.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.090781 kernel: audit: type=1130 audit(1765850874.080:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.094389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 02:07:54.117379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:07:54.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.122107 systemd-tmpfiles[376]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Dec 16 02:07:54.128000 audit: BPF prog-id=6 op=LOAD Dec 16 02:07:54.135237 kernel: audit: type=1130 audit(1765850874.124:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.135313 kernel: audit: type=1334 audit(1765850874.128:8): prog-id=6 op=LOAD Dec 16 02:07:54.136797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:07:54.154760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:07:54.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.165188 kernel: audit: type=1130 audit(1765850874.155:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.180662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:07:54.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.197090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 02:07:54.209090 kernel: audit: type=1130 audit(1765850874.191:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.292136 dracut-cmdline[401]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:07:54.324882 systemd-resolved[387]: Positive Trust Anchors: Dec 16 02:07:54.327332 systemd-resolved[387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:07:54.328751 systemd-resolved[387]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:07:54.328821 systemd-resolved[387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:07:54.587085 kernel: Loading iSCSI transport class v2.0-870. Dec 16 02:07:54.630081 kernel: random: crng init done Dec 16 02:07:54.634656 systemd-resolved[387]: Defaulting to hostname 'linux'. Dec 16 02:07:54.637585 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 02:07:54.649124 kernel: audit: type=1130 audit(1765850874.639:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.640258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:07:54.659068 kernel: iscsi: registered transport (tcp) Dec 16 02:07:54.712090 kernel: iscsi: registered transport (qla4xxx) Dec 16 02:07:54.712165 kernel: QLogic iSCSI HBA Driver Dec 16 02:07:54.751817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:07:54.784379 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:07:54.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.792931 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:07:54.871118 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 02:07:54.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.875823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 02:07:54.888399 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 02:07:54.951406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:07:54.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:54.959000 audit: BPF prog-id=7 op=LOAD Dec 16 02:07:54.959000 audit: BPF prog-id=8 op=LOAD Dec 16 02:07:54.961703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:07:55.027892 systemd-udevd[640]: Using default interface naming scheme 'v257'. Dec 16 02:07:55.047629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:07:55.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.059494 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 02:07:55.110097 dracut-pre-trigger[704]: rd.md=0: removing MD RAID activation Dec 16 02:07:55.127183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:07:55.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.135000 audit: BPF prog-id=9 op=LOAD Dec 16 02:07:55.138326 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 02:07:55.182190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:07:55.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.186674 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:07:55.247666 systemd-networkd[756]: lo: Link UP Dec 16 02:07:55.247686 systemd-networkd[756]: lo: Gained carrier Dec 16 02:07:55.250938 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:07:55.260720 systemd[1]: Reached target network.target - Network. Dec 16 02:07:55.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.353544 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:07:55.360309 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 02:07:55.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.569713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:07:55.572648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:07:55.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.578370 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:07:55.582398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:07:55.619090 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 02:07:55.619166 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 16 02:07:55.639302 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 02:07:55.639782 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 02:07:55.645368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:07:55.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:55.658130 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:60:28:3b:25:ef Dec 16 02:07:55.658569 kernel: nvme nvme0: using unchecked data buffer Dec 16 02:07:55.661130 (udev-worker)[796]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:07:55.679748 systemd-networkd[756]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:07:55.680029 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 02:07:55.692939 systemd-networkd[756]: eth0: Link UP Dec 16 02:07:55.693828 systemd-networkd[756]: eth0: Gained carrier Dec 16 02:07:55.693849 systemd-networkd[756]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:07:55.712164 systemd-networkd[756]: eth0: DHCPv4 address 172.31.24.92/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 02:07:55.818080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 02:07:55.825593 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 02:07:55.862153 disk-uuid[849]: Primary Header is updated. Dec 16 02:07:55.862153 disk-uuid[849]: Secondary Entries is updated. Dec 16 02:07:55.862153 disk-uuid[849]: Secondary Header is updated. Dec 16 02:07:55.986463 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 02:07:56.089723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 02:07:56.131019 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 02:07:56.221413 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 02:07:56.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.228320 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:07:56.231355 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:07:56.236739 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:07:56.244018 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 02:07:56.292798 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:07:56.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:56.731312 systemd-networkd[756]: eth0: Gained IPv6LL Dec 16 02:07:56.991541 disk-uuid[856]: Warning: The kernel is still using the old partition table. Dec 16 02:07:56.991541 disk-uuid[856]: The new table will be used at the next reboot or after you Dec 16 02:07:56.991541 disk-uuid[856]: run partprobe(8) or kpartx(8) Dec 16 02:07:56.991541 disk-uuid[856]: The operation has completed successfully. Dec 16 02:07:57.010615 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 02:07:57.013109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 02:07:57.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.022188 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 02:07:57.072088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1006) Dec 16 02:07:57.076843 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:57.076893 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:57.117564 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 02:07:57.117652 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 02:07:57.128095 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:57.129729 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 02:07:57.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:57.136816 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 02:07:58.338817 ignition[1025]: Ignition 2.24.0 Dec 16 02:07:58.338854 ignition[1025]: Stage: fetch-offline Dec 16 02:07:58.339333 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:58.339366 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:58.340010 ignition[1025]: Ignition finished successfully Dec 16 02:07:58.351905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:07:58.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:58.361491 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 02:07:58.408319 ignition[1033]: Ignition 2.24.0 Dec 16 02:07:58.408354 ignition[1033]: Stage: fetch Dec 16 02:07:58.408777 ignition[1033]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:58.408823 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:58.410239 ignition[1033]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:58.431613 ignition[1033]: PUT result: OK Dec 16 02:07:58.435613 ignition[1033]: parsed url from cmdline: "" Dec 16 02:07:58.435643 ignition[1033]: no config URL provided Dec 16 02:07:58.435670 ignition[1033]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:07:58.435711 ignition[1033]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:07:58.435751 ignition[1033]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:58.445319 ignition[1033]: PUT result: OK Dec 16 02:07:58.445454 ignition[1033]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 02:07:58.450468 ignition[1033]: GET result: OK Dec 16 02:07:58.450702 ignition[1033]: parsing config with SHA512: b7b440667c10af4e7a87e06939dca1dd334d38c762f717e7756625574a805c1a532a58be63c9a81d4b85cce6798ff376967eac90cf36f11029a7fac098ffc6d6 Dec 16 02:07:58.463885 unknown[1033]: fetched base config from "system" Dec 16 02:07:58.463914 unknown[1033]: fetched base config from "system" Dec 16 02:07:58.465362 ignition[1033]: fetch: fetch complete Dec 16 02:07:58.463930 unknown[1033]: fetched user config from "aws" Dec 16 02:07:58.465375 ignition[1033]: fetch: fetch passed Dec 16 02:07:58.465496 ignition[1033]: Ignition finished successfully Dec 16 02:07:58.482178 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 02:07:58.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:58.489673 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 02:07:58.534755 ignition[1039]: Ignition 2.24.0 Dec 16 02:07:58.536693 ignition[1039]: Stage: kargs Dec 16 02:07:58.538540 ignition[1039]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:58.538574 ignition[1039]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:58.538729 ignition[1039]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:58.546987 ignition[1039]: PUT result: OK Dec 16 02:07:58.559362 ignition[1039]: kargs: kargs passed Dec 16 02:07:58.559536 ignition[1039]: Ignition finished successfully Dec 16 02:07:58.566171 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 02:07:58.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:58.574362 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 02:07:58.633752 ignition[1045]: Ignition 2.24.0 Dec 16 02:07:58.633790 ignition[1045]: Stage: disks Dec 16 02:07:58.634298 ignition[1045]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:07:58.634324 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:07:58.635225 ignition[1045]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:07:58.640625 ignition[1045]: PUT result: OK Dec 16 02:07:58.653258 ignition[1045]: disks: disks passed Dec 16 02:07:58.653419 ignition[1045]: Ignition finished successfully Dec 16 02:07:58.660961 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 02:07:58.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:58.669191 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 02:07:58.674480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 02:07:58.674830 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:07:58.682772 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:07:58.685643 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:07:58.688107 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 02:07:58.822190 systemd-fsck[1053]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 02:07:58.829304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 02:07:58.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:07:58.839513 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 02:07:59.076099 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 02:07:59.077878 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 02:07:59.083564 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 02:07:59.140606 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:07:59.145818 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 02:07:59.154371 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 02:07:59.160212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 02:07:59.167088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:07:59.187779 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 02:07:59.195338 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 02:07:59.211103 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1072) Dec 16 02:07:59.215731 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:07:59.215812 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:07:59.227079 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 02:07:59.227157 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 02:07:59.230879 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 02:08:01.712414 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 02:08:01.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.723688 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 02:08:01.723770 kernel: audit: type=1130 audit(1765850881.714:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.720263 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 02:08:01.734560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 02:08:01.762800 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 02:08:01.768246 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:08:01.801662 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 02:08:01.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.813131 kernel: audit: type=1130 audit(1765850881.805:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.824738 ignition[1170]: INFO : Ignition 2.24.0 Dec 16 02:08:01.826975 ignition[1170]: INFO : Stage: mount Dec 16 02:08:01.826975 ignition[1170]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:08:01.826975 ignition[1170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:08:01.826975 ignition[1170]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:08:01.840002 ignition[1170]: INFO : PUT result: OK Dec 16 02:08:01.848376 ignition[1170]: INFO : mount: mount passed Dec 16 02:08:01.850257 ignition[1170]: INFO : Ignition finished successfully Dec 16 02:08:01.854151 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 02:08:01.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.860956 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 02:08:01.870745 kernel: audit: type=1130 audit(1765850881.855:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:01.908310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 16 02:08:01.951122 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1180) Dec 16 02:08:01.957102 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:08:01.957217 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:08:01.965495 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 02:08:01.967115 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 02:08:01.969782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 02:08:02.020021 ignition[1198]: INFO : Ignition 2.24.0 Dec 16 02:08:02.020021 ignition[1198]: INFO : Stage: files Dec 16 02:08:02.023977 ignition[1198]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:08:02.023977 ignition[1198]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:08:02.023977 ignition[1198]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:08:02.032846 ignition[1198]: INFO : PUT result: OK Dec 16 02:08:02.040168 ignition[1198]: DEBUG : files: compiled without relabeling support, skipping Dec 16 02:08:02.046215 ignition[1198]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 02:08:02.046215 ignition[1198]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 02:08:02.060076 ignition[1198]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 02:08:02.063560 ignition[1198]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 02:08:02.067545 unknown[1198]: wrote ssh authorized keys file for user: core Dec 16 02:08:02.071373 ignition[1198]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 02:08:02.074439 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 02:08:02.074439 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 02:08:02.159126 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:08:02.371149 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 
02:08:02.404315 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:08:02.404315 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:08:02.404315 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:08:02.404315 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 02:08:02.422867 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 02:08:02.422867 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 02:08:02.422867 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 02:08:02.879183 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 02:08:03.279763 ignition[1198]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 02:08:03.279763 ignition[1198]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 02:08:03.287640 ignition[1198]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:08:03.295741 ignition[1198]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:08:03.295741 ignition[1198]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 02:08:03.295741 ignition[1198]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 02:08:03.307190 ignition[1198]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 02:08:03.307190 ignition[1198]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:08:03.307190 ignition[1198]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:08:03.307190 ignition[1198]: INFO : files: files passed Dec 16 02:08:03.307190 ignition[1198]: INFO : Ignition finished successfully Dec 16 02:08:03.336604 kernel: audit: type=1130 audit(1765850883.314:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.311183 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 02:08:03.317408 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
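Each createFiles op in the files stage above amounts to fetching a URL and writing the result under the mounted /sysroot, with numbered retry attempts ("GET ...: attempt #1"). A rough Python sketch of that pattern, using the helm tarball from the log as the example; the retry count and backoff are assumptions, not Ignition's actual policy:

# Sketch of the download-and-write pattern behind ops (3) and (a) above.
# URL and destination are taken from the log; retries and backoff are assumed.
import time
import urllib.request

def fetch_to_sysroot(url, dest, attempts=5, backoff=2.0):
    for attempt in range(1, attempts + 1):
        try:
            # e.g. "GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1"
            with urllib.request.urlopen(url, timeout=30) as resp:
                data = resp.read()
            with open(dest, "wb") as f:
                f.write(data)
            return
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(backoff)  # assumed fixed delay between attempts

fetch_to_sysroot(
    "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz",
    "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz",
)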
Dec 16 02:08:03.329568 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 02:08:03.361226 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 02:08:03.361494 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 02:08:03.377532 kernel: audit: type=1130 audit(1765850883.365:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.377578 kernel: audit: type=1131 audit(1765850883.365:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.391383 initrd-setup-root-after-ignition[1228]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:08:03.395476 initrd-setup-root-after-ignition[1228]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:08:03.399643 initrd-setup-root-after-ignition[1232]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:08:03.407431 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:08:03.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.413740 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 02:08:03.424569 kernel: audit: type=1130 audit(1765850883.412:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.426139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 02:08:03.501932 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 02:08:03.504125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 02:08:03.518368 kernel: audit: type=1130 audit(1765850883.508:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.518413 kernel: audit: type=1131 audit(1765850883.508:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 02:08:03.510258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 02:08:03.525009 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 02:08:03.528485 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 02:08:03.534330 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 02:08:03.585467 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:08:03.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.596083 kernel: audit: type=1130 audit(1765850883.587:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.596373 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 02:08:03.639309 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:08:03.640730 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:08:03.648017 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:08:03.653793 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 02:08:03.658428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 02:08:03.659416 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:08:03.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.668228 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 02:08:03.671606 systemd[1]: Stopped target basic.target - Basic System. Dec 16 02:08:03.679844 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 02:08:03.685505 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:08:03.691324 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 02:08:03.694973 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:08:03.699210 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 02:08:03.703429 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:08:03.708595 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 02:08:03.716206 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 02:08:03.719782 systemd[1]: Stopped target swap.target - Swaps. Dec 16 02:08:03.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.724193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 02:08:03.725127 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:08:03.730633 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:08:03.738375 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 02:08:03.751487 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 02:08:03.752010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:08:03.760764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 02:08:03.761335 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 02:08:03.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.770351 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 02:08:03.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.770674 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:08:03.774834 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 02:08:03.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.775172 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 02:08:03.786522 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 02:08:03.793286 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 02:08:03.796172 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:08:03.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.808086 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 02:08:03.818331 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 02:08:03.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.818706 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:08:03.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.830563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 02:08:03.830847 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:08:03.835596 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 16 02:08:03.870199 ignition[1252]: INFO : Ignition 2.24.0 Dec 16 02:08:03.870199 ignition[1252]: INFO : Stage: umount Dec 16 02:08:03.870199 ignition[1252]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:08:03.870199 ignition[1252]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 02:08:03.870199 ignition[1252]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 02:08:03.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.835876 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:08:03.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.913489 ignition[1252]: INFO : PUT result: OK Dec 16 02:08:03.913489 ignition[1252]: INFO : umount: umount passed Dec 16 02:08:03.913489 ignition[1252]: INFO : Ignition finished successfully Dec 16 02:08:03.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.873644 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 02:08:03.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.878249 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 02:08:03.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.898655 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 02:08:03.898980 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 02:08:03.912035 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 02:08:03.912228 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 02:08:03.919233 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 02:08:03.919390 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 02:08:03.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.927318 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 02:08:03.927563 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Dec 16 02:08:03.936142 systemd[1]: Stopped target network.target - Network. Dec 16 02:08:03.939081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 02:08:03.939239 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:08:03.943307 systemd[1]: Stopped target paths.target - Path Units. Dec 16 02:08:03.947903 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 02:08:03.950404 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:08:03.953424 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 02:08:03.958314 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 02:08:03.964153 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 02:08:03.964264 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:08:03.967135 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 02:08:04.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.967239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:08:03.967817 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 02:08:03.967882 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:08:04.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.977362 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 02:08:03.977544 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 02:08:03.982799 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 02:08:04.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:03.982940 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 02:08:04.079000 audit: BPF prog-id=6 op=UNLOAD Dec 16 02:08:04.079000 audit: BPF prog-id=9 op=UNLOAD Dec 16 02:08:03.988844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 02:08:03.991639 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 02:08:04.020852 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 02:08:04.023141 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 02:08:04.023409 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 02:08:04.037634 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 02:08:04.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.037882 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 02:08:04.066434 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 02:08:04.066687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 02:08:04.080184 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Dec 16 02:08:04.089898 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 02:08:04.089995 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:08:04.099994 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 02:08:04.100204 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 02:08:04.111002 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 02:08:04.143580 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 02:08:04.143746 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:08:04.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.152843 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 02:08:04.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.152982 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:08:04.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.158229 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 02:08:04.158380 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 02:08:04.170560 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:08:04.195880 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 02:08:04.198351 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:08:04.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.206846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 02:08:04.210163 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 02:08:04.216344 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 02:08:04.217144 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:08:04.226172 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 02:08:04.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.226312 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:08:04.236279 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 02:08:04.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:04.236435 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 02:08:04.241888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 02:08:04.242025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:08:04.251346 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 02:08:04.270441 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 02:08:04.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.270604 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:08:04.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.282117 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 02:08:04.282649 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:08:04.289723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:08:04.289854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:04.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.321371 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 02:08:04.323204 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 02:08:04.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.329806 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 02:08:04.332551 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 02:08:04.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:04.339799 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 02:08:04.346840 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 02:08:04.395846 systemd[1]: Switching root. Dec 16 02:08:04.486365 systemd-journald[361]: Journal stopped Dec 16 02:08:08.789167 systemd-journald[361]: Received SIGTERM from PID 1 (systemd). 
Dec 16 02:08:08.789313 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 02:08:08.789360 kernel: SELinux: policy capability open_perms=1 Dec 16 02:08:08.789396 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 02:08:08.789439 kernel: SELinux: policy capability always_check_network=0 Dec 16 02:08:08.789473 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 02:08:08.789518 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 02:08:08.789548 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 02:08:08.789587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 02:08:08.789619 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 02:08:08.789653 systemd[1]: Successfully loaded SELinux policy in 170.362ms. Dec 16 02:08:08.789704 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.561ms. Dec 16 02:08:08.789744 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:08:08.789780 systemd[1]: Detected virtualization amazon. Dec 16 02:08:08.789816 systemd[1]: Detected architecture arm64. Dec 16 02:08:08.789856 systemd[1]: Detected first boot. Dec 16 02:08:08.789887 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:08:08.789921 zram_generator::config[1297]: No configuration found. Dec 16 02:08:08.789969 kernel: NET: Registered PF_VSOCK protocol family Dec 16 02:08:08.790001 systemd[1]: Populated /etc with preset unit settings. Dec 16 02:08:08.790039 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 16 02:08:08.790994 kernel: audit: type=1334 audit(1765850887.886:88): prog-id=12 op=LOAD Dec 16 02:08:08.793156 kernel: audit: type=1334 audit(1765850887.886:89): prog-id=3 op=UNLOAD Dec 16 02:08:08.793205 kernel: audit: type=1334 audit(1765850887.888:90): prog-id=13 op=LOAD Dec 16 02:08:08.793238 kernel: audit: type=1334 audit(1765850887.889:91): prog-id=14 op=LOAD Dec 16 02:08:08.793280 kernel: audit: type=1334 audit(1765850887.889:92): prog-id=4 op=UNLOAD Dec 16 02:08:08.793311 kernel: audit: type=1334 audit(1765850887.889:93): prog-id=5 op=UNLOAD Dec 16 02:08:08.793364 kernel: audit: type=1131 audit(1765850887.897:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.793400 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 02:08:08.793436 kernel: audit: type=1334 audit(1765850887.906:95): prog-id=12 op=UNLOAD Dec 16 02:08:08.793465 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 02:08:08.793497 kernel: audit: type=1130 audit(1765850887.911:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.793527 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 16 02:08:08.793561 kernel: audit: type=1131 audit(1765850887.911:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.793601 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 02:08:08.793635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 02:08:08.793669 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 02:08:08.793699 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 02:08:08.793730 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 02:08:08.793763 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 02:08:08.793803 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 02:08:08.793843 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 02:08:08.793874 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:08:08.793907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:08:08.793941 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 02:08:08.793974 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 02:08:08.794016 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 02:08:08.794079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:08:08.794119 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 02:08:08.794151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:08:08.794187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:08:08.794230 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 02:08:08.794262 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 02:08:08.794302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 02:08:08.794339 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 02:08:08.794373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:08:08.794406 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:08:08.794439 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 02:08:08.794470 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:08:08.794504 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:08:08.794537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 02:08:08.794584 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 02:08:08.794626 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 02:08:08.794662 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:08:08.794695 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Dec 16 02:08:08.794725 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:08:08.794758 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 02:08:08.794792 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 02:08:08.794830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:08:08.794861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:08:08.794893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 02:08:08.794923 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 02:08:08.794957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 02:08:08.794990 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 02:08:08.795023 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 02:08:08.797157 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 02:08:08.797209 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 02:08:08.797242 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 02:08:08.802930 systemd[1]: Reached target machines.target - Containers. Dec 16 02:08:08.802977 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 02:08:08.803010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:08:08.803085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:08:08.803125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 02:08:08.803157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:08:08.803187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:08:08.803220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:08:08.803254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 02:08:08.803287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:08:08.803327 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 02:08:08.803361 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 02:08:08.803394 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 02:08:08.803424 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 02:08:08.803456 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 02:08:08.803491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:08:08.803526 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:08:08.803558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:08:08.803588 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Dec 16 02:08:08.803618 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 02:08:08.803651 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 02:08:08.803688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:08:08.803717 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 02:08:08.803747 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 02:08:08.803777 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 02:08:08.803807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 02:08:08.803837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 02:08:08.803875 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 02:08:08.803911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:08:08.803941 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 02:08:08.803974 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 02:08:08.804006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:08:08.804039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:08:08.804604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:08:08.804638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:08:08.804667 kernel: fuse: init (API version 7.41) Dec 16 02:08:08.804697 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:08:08.804732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:08:08.804762 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 02:08:08.804799 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 02:08:08.804832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 02:08:08.804863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:08:08.804897 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 02:08:08.804931 kernel: ACPI: bus type drm_connector registered Dec 16 02:08:08.804959 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 02:08:08.804989 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 02:08:08.805022 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 02:08:08.805086 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:08:08.805121 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 02:08:08.805155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:08:08.805191 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:08:08.805221 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 02:08:08.805254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 02:08:08.805291 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 02:08:08.805321 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:08:08.805396 systemd-journald[1375]: Collecting audit messages is enabled. Dec 16 02:08:08.805455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:08:08.805486 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 02:08:08.805517 systemd-journald[1375]: Journal started Dec 16 02:08:08.805566 systemd-journald[1375]: Runtime Journal (/run/log/journal/ec26b1b3b3dde4b9aac5448327693da5) is 8M, max 75.3M, 67.3M free. Dec 16 02:08:08.077000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 02:08:08.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.416000 audit: BPF prog-id=14 op=UNLOAD Dec 16 02:08:08.416000 audit: BPF prog-id=13 op=UNLOAD Dec 16 02:08:08.422000 audit: BPF prog-id=15 op=LOAD Dec 16 02:08:08.423000 audit: BPF prog-id=16 op=LOAD Dec 16 02:08:08.423000 audit: BPF prog-id=17 op=LOAD Dec 16 02:08:08.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:08.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.777000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 02:08:08.777000 audit[1375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe2cfbdf0 a2=4000 a3=0 items=0 ppid=1 pid=1375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:08.777000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 02:08:07.861631 systemd[1]: Queued start job for default target multi-user.target. Dec 16 02:08:07.892324 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 02:08:08.812471 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:08:08.812948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:08:07.894862 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 02:08:08.827442 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:08:08.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:08.834241 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:08:08.847297 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 02:08:08.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.856342 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 02:08:08.861652 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 02:08:08.873632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 02:08:08.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.919550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 02:08:08.922864 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:08:08.928122 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 02:08:08.933238 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 02:08:08.953096 kernel: loop1: detected capacity change from 0 to 200800 Dec 16 02:08:08.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.955271 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 02:08:08.964551 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 02:08:08.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:08.983994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:08:09.014368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 02:08:09.018933 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 02:08:09.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.036396 systemd-journald[1375]: Time spent on flushing to /var/log/journal/ec26b1b3b3dde4b9aac5448327693da5 is 76.536ms for 1063 entries. Dec 16 02:08:09.036396 systemd-journald[1375]: System Journal (/var/log/journal/ec26b1b3b3dde4b9aac5448327693da5) is 8M, max 588.1M, 580.1M free. Dec 16 02:08:09.156698 systemd-journald[1375]: Received client request to flush runtime journal. Dec 16 02:08:09.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.108233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 02:08:09.165211 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 02:08:09.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.173363 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 02:08:09.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.182000 audit: BPF prog-id=18 op=LOAD Dec 16 02:08:09.183000 audit: BPF prog-id=19 op=LOAD Dec 16 02:08:09.184000 audit: BPF prog-id=20 op=LOAD Dec 16 02:08:09.190449 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 02:08:09.195000 audit: BPF prog-id=21 op=LOAD Dec 16 02:08:09.199284 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:08:09.207567 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:08:09.218183 kernel: loop2: detected capacity change from 0 to 45344 Dec 16 02:08:09.227000 audit: BPF prog-id=22 op=LOAD Dec 16 02:08:09.228000 audit: BPF prog-id=23 op=LOAD Dec 16 02:08:09.228000 audit: BPF prog-id=24 op=LOAD Dec 16 02:08:09.232635 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 02:08:09.238000 audit: BPF prog-id=25 op=LOAD Dec 16 02:08:09.239000 audit: BPF prog-id=26 op=LOAD Dec 16 02:08:09.239000 audit: BPF prog-id=27 op=LOAD Dec 16 02:08:09.243966 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 02:08:09.324403 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Dec 16 02:08:09.324446 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Dec 16 02:08:09.347211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:08:09.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.398032 systemd-nsresourced[1452]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 02:08:09.401839 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 02:08:09.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.424609 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 02:08:09.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.544074 kernel: loop3: detected capacity change from 0 to 61504 Dec 16 02:08:09.600994 systemd-oomd[1448]: No swap; memory pressure usage will be degraded Dec 16 02:08:09.602826 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
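
The systemd-oomd warning above ("No swap; memory pressure usage will be degraded") refers to the kernel's pressure-stall information, which oomd reads from /proc/pressure/memory. A minimal sketch of reading that interface directly, assuming a Linux kernel with PSI enabled; the "some"/"full" field layout is the standard PSI format, not something taken from this log:

# Read the kernel PSI memory file that systemd-oomd bases its decisions on.
# Each line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
def read_memory_pressure(path="/proc/pressure/memory"):
    pressure = {}
    with open(path) as fh:
        for line in fh:
            kind, *fields = line.split()                      # "some" or "full"
            pressure[kind] = {key: float(val)
                              for key, val in (field.split("=") for field in fields)}
    return pressure

if __name__ == "__main__":
    for kind, stats in read_memory_pressure().items():
        print(kind, stats)
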
Dec 16 02:08:09.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.675821 systemd-resolved[1449]: Positive Trust Anchors: Dec 16 02:08:09.675868 systemd-resolved[1449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:08:09.675878 systemd-resolved[1449]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:08:09.675940 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:08:09.692398 systemd-resolved[1449]: Defaulting to hostname 'linux'. Dec 16 02:08:09.695040 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 02:08:09.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:09.698787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:08:09.911095 kernel: loop4: detected capacity change from 0 to 100192 Dec 16 02:08:10.148183 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 02:08:10.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:10.150000 audit: BPF prog-id=8 op=UNLOAD Dec 16 02:08:10.150000 audit: BPF prog-id=7 op=UNLOAD Dec 16 02:08:10.152000 audit: BPF prog-id=28 op=LOAD Dec 16 02:08:10.152000 audit: BPF prog-id=29 op=LOAD Dec 16 02:08:10.155304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:08:10.225539 systemd-udevd[1472]: Using default interface naming scheme 'v257'. Dec 16 02:08:10.232108 kernel: loop5: detected capacity change from 0 to 200800 Dec 16 02:08:10.258476 kernel: loop6: detected capacity change from 0 to 45344 Dec 16 02:08:10.270125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:08:10.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:10.275000 audit: BPF prog-id=30 op=LOAD Dec 16 02:08:10.278965 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:08:10.290688 kernel: loop7: detected capacity change from 0 to 61504 Dec 16 02:08:10.313091 kernel: loop1: detected capacity change from 0 to 100192 Dec 16 02:08:10.331362 (sd-merge)[1474]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 16 02:08:10.341602 (sd-merge)[1474]: Merged extensions into '/usr'. 
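
The positive trust anchors logged by systemd-resolved above are root-zone DS records in the usual "owner class type key-tag algorithm digest-type digest" layout. A minimal sketch of splitting one such line into named fields; the record text is copied from the log, and the field names follow standard DS-record terminology:

# Split a DS-style trust anchor line, as logged by systemd-resolved above.
def parse_ds(line):
    owner, _class, _rtype, key_tag, algorithm, digest_type, digest = line.split()
    return {
        "owner": owner,                   # "." is the DNS root zone
        "key_tag": int(key_tag),
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest,
    }

anchor = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
print(parse_ds(anchor))
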
Dec 16 02:08:10.355303 systemd[1]: Reload requested from client PID 1412 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 02:08:10.355571 systemd[1]: Reloading... Dec 16 02:08:10.497104 (udev-worker)[1482]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:08:10.565040 systemd-networkd[1478]: lo: Link UP Dec 16 02:08:10.565113 systemd-networkd[1478]: lo: Gained carrier Dec 16 02:08:10.658137 zram_generator::config[1535]: No configuration found. Dec 16 02:08:10.747662 systemd-networkd[1478]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:08:10.747687 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:08:10.760960 systemd-networkd[1478]: eth0: Link UP Dec 16 02:08:10.761832 systemd-networkd[1478]: eth0: Gained carrier Dec 16 02:08:10.761893 systemd-networkd[1478]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:08:10.775192 systemd-networkd[1478]: eth0: DHCPv4 address 172.31.24.92/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 02:08:11.277705 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 02:08:11.280089 systemd[1]: Reloading finished in 923 ms. Dec 16 02:08:11.308777 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:08:11.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:11.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:11.314212 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 02:08:11.363469 systemd[1]: Reached target network.target - Network. Dec 16 02:08:11.377370 systemd[1]: Starting ensure-sysext.service... Dec 16 02:08:11.383456 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 02:08:11.389435 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 02:08:11.394728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:08:11.407000 audit: BPF prog-id=31 op=LOAD Dec 16 02:08:11.407000 audit: BPF prog-id=15 op=UNLOAD Dec 16 02:08:11.407000 audit: BPF prog-id=32 op=LOAD Dec 16 02:08:11.407000 audit: BPF prog-id=33 op=LOAD Dec 16 02:08:11.407000 audit: BPF prog-id=16 op=UNLOAD Dec 16 02:08:11.407000 audit: BPF prog-id=17 op=UNLOAD Dec 16 02:08:11.409000 audit: BPF prog-id=34 op=LOAD Dec 16 02:08:11.409000 audit: BPF prog-id=18 op=UNLOAD Dec 16 02:08:11.409000 audit: BPF prog-id=35 op=LOAD Dec 16 02:08:11.403559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
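
The DHCPv4 lease acquired above (172.31.24.92/20 with gateway 172.31.16.1) can be sanity-checked with the standard-library ipaddress module; a minimal sketch using the values from the systemd-networkd message:

import ipaddress

# Values taken from the systemd-networkd DHCPv4 message above.
iface = ipaddress.ip_interface("172.31.24.92/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(gateway in iface.network)      # True: the gateway is on-link
print(iface.network.num_addresses)   # 4096 addresses in a /20
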
Dec 16 02:08:11.410000 audit: BPF prog-id=36 op=LOAD Dec 16 02:08:11.410000 audit: BPF prog-id=19 op=UNLOAD Dec 16 02:08:11.410000 audit: BPF prog-id=20 op=UNLOAD Dec 16 02:08:11.411000 audit: BPF prog-id=37 op=LOAD Dec 16 02:08:11.412000 audit: BPF prog-id=30 op=UNLOAD Dec 16 02:08:11.415000 audit: BPF prog-id=38 op=LOAD Dec 16 02:08:11.415000 audit: BPF prog-id=21 op=UNLOAD Dec 16 02:08:11.418000 audit: BPF prog-id=39 op=LOAD Dec 16 02:08:11.418000 audit: BPF prog-id=25 op=UNLOAD Dec 16 02:08:11.419000 audit: BPF prog-id=40 op=LOAD Dec 16 02:08:11.419000 audit: BPF prog-id=41 op=LOAD Dec 16 02:08:11.419000 audit: BPF prog-id=26 op=UNLOAD Dec 16 02:08:11.419000 audit: BPF prog-id=27 op=UNLOAD Dec 16 02:08:11.420000 audit: BPF prog-id=42 op=LOAD Dec 16 02:08:11.421000 audit: BPF prog-id=43 op=LOAD Dec 16 02:08:11.421000 audit: BPF prog-id=28 op=UNLOAD Dec 16 02:08:11.421000 audit: BPF prog-id=29 op=UNLOAD Dec 16 02:08:11.423000 audit: BPF prog-id=44 op=LOAD Dec 16 02:08:11.424000 audit: BPF prog-id=22 op=UNLOAD Dec 16 02:08:11.424000 audit: BPF prog-id=45 op=LOAD Dec 16 02:08:11.425000 audit: BPF prog-id=46 op=LOAD Dec 16 02:08:11.425000 audit: BPF prog-id=23 op=UNLOAD Dec 16 02:08:11.426000 audit: BPF prog-id=24 op=UNLOAD Dec 16 02:08:11.476653 systemd[1]: Reload requested from client PID 1665 ('systemctl') (unit ensure-sysext.service)... Dec 16 02:08:11.476678 systemd[1]: Reloading... Dec 16 02:08:11.556781 systemd-tmpfiles[1672]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 02:08:11.556900 systemd-tmpfiles[1672]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 02:08:11.558016 systemd-tmpfiles[1672]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 02:08:11.562827 systemd-tmpfiles[1672]: ACLs are not supported, ignoring. Dec 16 02:08:11.562993 systemd-tmpfiles[1672]: ACLs are not supported, ignoring. Dec 16 02:08:11.609345 systemd-tmpfiles[1672]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:08:11.609377 systemd-tmpfiles[1672]: Skipping /boot Dec 16 02:08:11.638871 systemd-tmpfiles[1672]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:08:11.638912 systemd-tmpfiles[1672]: Skipping /boot Dec 16 02:08:11.760213 zram_generator::config[1735]: No configuration found. Dec 16 02:08:12.224124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 02:08:12.228485 systemd[1]: Reloading finished in 750 ms. 
Dec 16 02:08:12.251000 audit: BPF prog-id=47 op=LOAD Dec 16 02:08:12.251000 audit: BPF prog-id=37 op=UNLOAD Dec 16 02:08:12.252000 audit: BPF prog-id=48 op=LOAD Dec 16 02:08:12.252000 audit: BPF prog-id=49 op=LOAD Dec 16 02:08:12.252000 audit: BPF prog-id=42 op=UNLOAD Dec 16 02:08:12.253000 audit: BPF prog-id=43 op=UNLOAD Dec 16 02:08:12.254000 audit: BPF prog-id=50 op=LOAD Dec 16 02:08:12.254000 audit: BPF prog-id=31 op=UNLOAD Dec 16 02:08:12.255000 audit: BPF prog-id=51 op=LOAD Dec 16 02:08:12.255000 audit: BPF prog-id=52 op=LOAD Dec 16 02:08:12.255000 audit: BPF prog-id=32 op=UNLOAD Dec 16 02:08:12.255000 audit: BPF prog-id=33 op=UNLOAD Dec 16 02:08:12.256000 audit: BPF prog-id=53 op=LOAD Dec 16 02:08:12.257000 audit: BPF prog-id=39 op=UNLOAD Dec 16 02:08:12.257000 audit: BPF prog-id=54 op=LOAD Dec 16 02:08:12.257000 audit: BPF prog-id=55 op=LOAD Dec 16 02:08:12.261000 audit: BPF prog-id=40 op=UNLOAD Dec 16 02:08:12.261000 audit: BPF prog-id=41 op=UNLOAD Dec 16 02:08:12.263000 audit: BPF prog-id=56 op=LOAD Dec 16 02:08:12.263000 audit: BPF prog-id=38 op=UNLOAD Dec 16 02:08:12.265000 audit: BPF prog-id=57 op=LOAD Dec 16 02:08:12.266000 audit: BPF prog-id=34 op=UNLOAD Dec 16 02:08:12.266000 audit: BPF prog-id=58 op=LOAD Dec 16 02:08:12.266000 audit: BPF prog-id=59 op=LOAD Dec 16 02:08:12.266000 audit: BPF prog-id=35 op=UNLOAD Dec 16 02:08:12.267000 audit: BPF prog-id=36 op=UNLOAD Dec 16 02:08:12.268000 audit: BPF prog-id=60 op=LOAD Dec 16 02:08:12.268000 audit: BPF prog-id=44 op=UNLOAD Dec 16 02:08:12.268000 audit: BPF prog-id=61 op=LOAD Dec 16 02:08:12.268000 audit: BPF prog-id=62 op=LOAD Dec 16 02:08:12.268000 audit: BPF prog-id=45 op=UNLOAD Dec 16 02:08:12.268000 audit: BPF prog-id=46 op=UNLOAD Dec 16 02:08:12.278392 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 02:08:12.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.282419 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:08:12.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.289680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:08:12.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.347340 systemd-networkd[1478]: eth0: Gained IPv6LL Dec 16 02:08:12.357151 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 02:08:12.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.363259 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 02:08:12.369645 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 02:08:12.376584 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 02:08:12.379575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:08:12.383621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:08:12.388653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:08:12.394132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:08:12.397548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:08:12.397992 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:08:12.405712 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 02:08:12.414520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 02:08:12.417276 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:08:12.422669 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 02:08:12.430126 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 02:08:12.442680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:08:12.444201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:08:12.444648 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:08:12.444917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:08:12.456846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:08:12.467660 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:08:12.470461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:08:12.470836 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:08:12.472102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:08:12.472576 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 02:08:12.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.487580 systemd[1]: Finished ensure-sysext.service. 
Dec 16 02:08:12.491955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:08:12.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.494163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:08:12.503364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:08:12.517818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:08:12.520143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:08:12.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.525282 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:08:12.527214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:08:12.532317 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:08:12.532000 audit[1793]: SYSTEM_BOOT pid=1793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.543582 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 02:08:12.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.547314 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 16 02:08:12.547745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:08:12.563642 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 02:08:12.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.596185 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 02:08:12.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:12.752000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 02:08:12.752000 audit[1823]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe56c36a0 a2=420 a3=0 items=0 ppid=1785 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:12.752000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:08:12.755549 augenrules[1823]: No rules Dec 16 02:08:12.756629 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:08:12.759576 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 02:08:12.802482 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 02:08:12.806618 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 02:08:15.205945 ldconfig[1790]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 02:08:15.214161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 02:08:15.220173 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 02:08:15.251335 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 02:08:15.255547 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:08:15.258401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 02:08:15.261278 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 02:08:15.264422 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 02:08:15.267183 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 02:08:15.270130 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 02:08:15.273261 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 02:08:15.275884 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 02:08:15.279266 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 02:08:15.279329 systemd[1]: Reached target paths.target - Path Units. 
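
The PROCTITLE field in the auditctl record above is the process command line, hex-encoded with NUL bytes separating the arguments. A minimal sketch of decoding it; the hex string is copied verbatim from the audit record in the log:

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hexstr):
    return bytes.fromhex(hexstr).split(b"\x00")

proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
print([arg.decode() for arg in decode_proctitle(proctitle)])
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
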
Dec 16 02:08:15.281542 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:08:15.285176 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 02:08:15.290634 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 02:08:15.297652 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 02:08:15.301526 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 02:08:15.304834 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 02:08:15.320229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 02:08:15.323201 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 02:08:15.327079 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 02:08:15.329730 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:08:15.331885 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:08:15.334997 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:08:15.335174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:08:15.337132 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 02:08:15.346331 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 02:08:15.356216 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 02:08:15.362486 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 02:08:15.374443 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 02:08:15.379544 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 02:08:15.384451 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 02:08:15.410910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:15.419364 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 02:08:15.429070 jq[1839]: false Dec 16 02:08:15.427578 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 02:08:15.439474 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 02:08:15.446404 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 02:08:15.457452 extend-filesystems[1840]: Found /dev/nvme0n1p6 Dec 16 02:08:15.460474 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 02:08:15.469102 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 02:08:15.478499 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 02:08:15.494476 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 02:08:15.497252 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 02:08:15.498234 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 02:08:15.506248 systemd[1]: Starting update-engine.service - Update Engine... 
Dec 16 02:08:15.509853 extend-filesystems[1840]: Found /dev/nvme0n1p9 Dec 16 02:08:15.514029 extend-filesystems[1840]: Checking size of /dev/nvme0n1p9 Dec 16 02:08:15.527720 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 02:08:15.547188 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 02:08:15.550719 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 02:08:15.552187 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 02:08:15.611163 extend-filesystems[1840]: Resized partition /dev/nvme0n1p9 Dec 16 02:08:15.645579 extend-filesystems[1877]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 02:08:15.675369 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 02:08:15.676606 jq[1857]: true Dec 16 02:08:15.662401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 02:08:15.662874 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 02:08:15.714080 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 16 02:08:15.693528 dbus-daemon[1837]: [system] SELinux support is enabled Dec 16 02:08:15.694385 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 02:08:15.717999 dbus-daemon[1837]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1478 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 02:08:15.703880 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 02:08:15.703931 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 02:08:15.708441 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 02:08:15.708475 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 02:08:15.726072 extend-filesystems[1877]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 02:08:15.726072 extend-filesystems[1877]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 02:08:15.726072 extend-filesystems[1877]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Dec 16 02:08:15.745774 extend-filesystems[1840]: Resized filesystem in /dev/nvme0n1p9 Dec 16 02:08:15.753841 ntpd[1843]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:39:58 UTC 2025 (1): Starting Dec 16 02:08:15.753961 ntpd[1843]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:39:58 UTC 2025 (1): Starting Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: ---------------------------------------------------- Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: ntp-4 is maintained by Network Time Foundation, Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: corporation. Support and training for ntp-4 are Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: available at https://www.nwtime.org/support Dec 16 02:08:15.754449 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: ---------------------------------------------------- Dec 16 02:08:15.753982 ntpd[1843]: ---------------------------------------------------- Dec 16 02:08:15.754000 ntpd[1843]: ntp-4 is maintained by Network Time Foundation, Dec 16 02:08:15.754018 ntpd[1843]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 02:08:15.754035 ntpd[1843]: corporation. Support and training for ntp-4 are Dec 16 02:08:15.754078 ntpd[1843]: available at https://www.nwtime.org/support Dec 16 02:08:15.754098 ntpd[1843]: ---------------------------------------------------- Dec 16 02:08:15.767617 ntpd[1843]: proto: precision = 0.096 usec (-23) Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: proto: precision = 0.096 usec (-23) Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: basedate set to 2025-12-03 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: gps base set to 2025-12-07 (week 2396) Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listen normally on 3 eth0 172.31.24.92:123 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listen normally on 4 lo [::1]:123 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listen normally on 5 eth0 [fe80::460:28ff:fe3b:25ef%2]:123 Dec 16 02:08:15.778259 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: Listening on routing socket on fd #22 for interface updates Dec 16 02:08:15.768985 ntpd[1843]: basedate set to 2025-12-03 Dec 16 02:08:15.769015 ntpd[1843]: gps base set to 2025-12-07 (week 2396) Dec 16 02:08:15.769225 ntpd[1843]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 02:08:15.769275 ntpd[1843]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 02:08:15.771391 ntpd[1843]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 02:08:15.771470 ntpd[1843]: Listen normally on 3 eth0 172.31.24.92:123 Dec 16 02:08:15.771522 ntpd[1843]: Listen normally on 4 lo [::1]:123 Dec 16 02:08:15.771569 ntpd[1843]: Listen normally on 5 eth0 [fe80::460:28ff:fe3b:25ef%2]:123 Dec 16 02:08:15.771613 ntpd[1843]: Listening on routing socket on fd #22 for interface updates Dec 16 02:08:15.785404 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 02:08:15.790283 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 02:08:15.790746 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
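
The resize2fs output above grows /dev/nvme0n1p9 from 1617920 to 2604027 blocks of 4 KiB each. A minimal sketch of turning those block counts into byte and GiB sizes, with both figures taken from the kernel and extend-filesystems messages:

# Block counts from the EXT4 resize messages above; block size is 4 KiB ("(4k) blocks").
BLOCK = 4096
old_blocks, new_blocks = 1617920, 2604027

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size = blocks * BLOCK
    print(f"{label}: {size} bytes (~{size / 2**30:.2f} GiB)")
# before: 6627000320 bytes (~6.17 GiB)
# after: 10666094592 bytes (~9.93 GiB)
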
Dec 16 02:08:15.817261 ntpd[1843]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:15.818991 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:15.818991 ntpd[1843]: 16 Dec 02:08:15 ntpd[1843]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:15.817329 ntpd[1843]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 02:08:15.829359 coreos-metadata[1836]: Dec 16 02:08:15.829 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 02:08:15.833361 coreos-metadata[1836]: Dec 16 02:08:15.833 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 02:08:15.834754 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 02:08:15.836232 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 02:08:15.851070 jq[1889]: true Dec 16 02:08:15.853516 coreos-metadata[1836]: Dec 16 02:08:15.853 INFO Fetch successful Dec 16 02:08:15.853516 coreos-metadata[1836]: Dec 16 02:08:15.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 02:08:15.856624 coreos-metadata[1836]: Dec 16 02:08:15.856 INFO Fetch successful Dec 16 02:08:15.856624 coreos-metadata[1836]: Dec 16 02:08:15.856 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 02:08:15.856624 coreos-metadata[1836]: Dec 16 02:08:15.856 INFO Fetch successful Dec 16 02:08:15.856624 coreos-metadata[1836]: Dec 16 02:08:15.856 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 02:08:15.861385 coreos-metadata[1836]: Dec 16 02:08:15.861 INFO Fetch successful Dec 16 02:08:15.861385 coreos-metadata[1836]: Dec 16 02:08:15.861 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 02:08:15.865576 coreos-metadata[1836]: Dec 16 02:08:15.865 INFO Fetch failed with 404: resource not found Dec 16 02:08:15.865576 coreos-metadata[1836]: Dec 16 02:08:15.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 02:08:15.871367 coreos-metadata[1836]: Dec 16 02:08:15.871 INFO Fetch successful Dec 16 02:08:15.871367 coreos-metadata[1836]: Dec 16 02:08:15.871 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 02:08:15.880060 tar[1864]: linux-arm64/LICENSE Dec 16 02:08:15.881889 tar[1864]: linux-arm64/helm Dec 16 02:08:15.885396 coreos-metadata[1836]: Dec 16 02:08:15.885 INFO Fetch successful Dec 16 02:08:15.885396 coreos-metadata[1836]: Dec 16 02:08:15.885 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 02:08:15.887268 coreos-metadata[1836]: Dec 16 02:08:15.887 INFO Fetch successful Dec 16 02:08:15.887268 coreos-metadata[1836]: Dec 16 02:08:15.887 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 02:08:15.896308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 16 02:08:15.900209 coreos-metadata[1836]: Dec 16 02:08:15.900 INFO Fetch successful Dec 16 02:08:15.900209 coreos-metadata[1836]: Dec 16 02:08:15.900 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 02:08:15.900209 coreos-metadata[1836]: Dec 16 02:08:15.900 INFO Fetch successful Dec 16 02:08:15.948086 update_engine[1854]: I20251216 02:08:15.935011 1854 main.cc:92] Flatcar Update Engine starting Dec 16 02:08:15.966226 systemd[1]: Started update-engine.service - Update Engine. Dec 16 02:08:15.978398 update_engine[1854]: I20251216 02:08:15.976565 1854 update_check_scheduler.cc:74] Next update check in 11m27s Dec 16 02:08:16.030734 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 02:08:16.035691 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 02:08:16.053259 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 02:08:16.100549 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 02:08:16.104811 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 02:08:16.285150 bash[1971]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:08:16.293173 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 02:08:16.318333 systemd[1]: Starting sshkeys.service... Dec 16 02:08:16.389283 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 02:08:16.401241 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 02:08:16.530719 systemd-logind[1853]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 02:08:16.530773 systemd-logind[1853]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 16 02:08:16.541366 systemd-logind[1853]: New seat seat0. Dec 16 02:08:16.548761 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 02:08:16.578559 amazon-ssm-agent[1945]: Initializing new seelog logger Dec 16 02:08:16.578559 amazon-ssm-agent[1945]: New Seelog Logger Creation Complete Dec 16 02:08:16.580324 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.580324 amazon-ssm-agent[1945]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.580324 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 processing appconfig overrides Dec 16 02:08:16.597338 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.597338 amazon-ssm-agent[1945]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.597338 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 processing appconfig overrides Dec 16 02:08:16.597338 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.597338 amazon-ssm-agent[1945]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
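
The coreos-metadata fetches above follow the EC2 instance-metadata pattern: PUT /latest/api/token first, then send that token with each metadata GET. A minimal standard-library sketch of the same token-then-fetch flow; the base URL and the meta-data path are the ones in the log, the 60-second TTL is an illustrative choice, and the script only works from inside an EC2 instance:

import urllib.request

IMDS = "http://169.254.169.254"

# IMDSv2: obtain a session token, then present it on every metadata request.
def imds_token(ttl=60):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # Same path coreos-metadata queries first in the log above.
    print(imds_get("2021-01-03/meta-data/instance-id", token))
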
Dec 16 02:08:16.597338 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 processing appconfig overrides Dec 16 02:08:16.597624 amazon-ssm-agent[1945]: 2025-12-16 02:08:16.5912 INFO Proxy environment variables: Dec 16 02:08:16.604349 locksmithd[1930]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 02:08:16.615172 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.615172 amazon-ssm-agent[1945]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:16.620077 amazon-ssm-agent[1945]: 2025/12/16 02:08:16 processing appconfig overrides Dec 16 02:08:16.657579 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 02:08:16.668826 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 02:08:16.674259 dbus-daemon[1837]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1902 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 02:08:16.687575 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 02:08:16.705700 amazon-ssm-agent[1945]: 2025-12-16 02:08:16.5913 INFO https_proxy: Dec 16 02:08:16.804890 amazon-ssm-agent[1945]: 2025-12-16 02:08:16.5913 INFO http_proxy: Dec 16 02:08:16.887083 coreos-metadata[2009]: Dec 16 02:08:16.884 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 02:08:16.896175 coreos-metadata[2009]: Dec 16 02:08:16.892 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 02:08:16.898068 coreos-metadata[2009]: Dec 16 02:08:16.896 INFO Fetch successful Dec 16 02:08:16.898068 coreos-metadata[2009]: Dec 16 02:08:16.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 02:08:16.901346 coreos-metadata[2009]: Dec 16 02:08:16.901 INFO Fetch successful Dec 16 02:08:16.905868 unknown[2009]: wrote ssh authorized keys file for user: core Dec 16 02:08:16.916636 amazon-ssm-agent[1945]: 2025-12-16 02:08:16.5913 INFO no_proxy: Dec 16 02:08:16.995146 update-ssh-keys[2064]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:08:16.998794 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 02:08:17.018193 amazon-ssm-agent[1945]: 2025-12-16 02:08:16.5915 INFO Checking if agent identity type OnPrem can be assumed Dec 16 02:08:17.021698 systemd[1]: Finished sshkeys.service. 
Dec 16 02:08:17.078721 containerd[1908]: time="2025-12-16T02:08:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 02:08:17.082032 containerd[1908]: time="2025-12-16T02:08:17.081337628Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 02:08:17.128149 amazon-ssm-agent[1945]: 2025-12-16 02:08:16.5916 INFO Checking if agent identity type EC2 can be assumed Dec 16 02:08:17.159068 containerd[1908]: time="2025-12-16T02:08:17.158567924Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.336µs" Dec 16 02:08:17.159068 containerd[1908]: time="2025-12-16T02:08:17.158628812Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 02:08:17.159068 containerd[1908]: time="2025-12-16T02:08:17.158701640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 02:08:17.159068 containerd[1908]: time="2025-12-16T02:08:17.158732576Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 02:08:17.159068 containerd[1908]: time="2025-12-16T02:08:17.159018668Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 02:08:17.163640 containerd[1908]: time="2025-12-16T02:08:17.163580864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:08:17.163919 containerd[1908]: time="2025-12-16T02:08:17.163789124Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:08:17.163919 containerd[1908]: time="2025-12-16T02:08:17.163828772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.164398 containerd[1908]: time="2025-12-16T02:08:17.164345312Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.164485 containerd[1908]: time="2025-12-16T02:08:17.164393444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:08:17.164485 containerd[1908]: time="2025-12-16T02:08:17.164422976Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:08:17.164485 containerd[1908]: time="2025-12-16T02:08:17.164444684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.165116 containerd[1908]: time="2025-12-16T02:08:17.164747420Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.165116 containerd[1908]: time="2025-12-16T02:08:17.164787728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 02:08:17.165116 containerd[1908]: 
time="2025-12-16T02:08:17.164948252Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.171546 containerd[1908]: time="2025-12-16T02:08:17.171480260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.171656 containerd[1908]: time="2025-12-16T02:08:17.171606536Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:08:17.171656 containerd[1908]: time="2025-12-16T02:08:17.171635864Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 02:08:17.171787 containerd[1908]: time="2025-12-16T02:08:17.171707504Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 02:08:17.172231 containerd[1908]: time="2025-12-16T02:08:17.172169492Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 02:08:17.172448 containerd[1908]: time="2025-12-16T02:08:17.172348148Z" level=info msg="metadata content store policy set" policy=shared Dec 16 02:08:17.194391 containerd[1908]: time="2025-12-16T02:08:17.194106884Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 02:08:17.194391 containerd[1908]: time="2025-12-16T02:08:17.194220536Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:08:17.194391 containerd[1908]: time="2025-12-16T02:08:17.194386484Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194419892Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194452412Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194482040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194510156Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194535380Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194582480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 02:08:17.194625 containerd[1908]: time="2025-12-16T02:08:17.194615612Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 02:08:17.194892 containerd[1908]: time="2025-12-16T02:08:17.194650820Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 02:08:17.194892 containerd[1908]: time="2025-12-16T02:08:17.194678600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Dec 16 02:08:17.194892 containerd[1908]: time="2025-12-16T02:08:17.194706992Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 02:08:17.194892 containerd[1908]: time="2025-12-16T02:08:17.194736404Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 02:08:17.195038 containerd[1908]: time="2025-12-16T02:08:17.194972696Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 02:08:17.195038 containerd[1908]: time="2025-12-16T02:08:17.195010016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195193004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195238424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195266648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195301064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195330836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195360248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195390008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195420956Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195446420Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195497144Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195561656Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195591068Z" level=info msg="Start snapshots syncer" Dec 16 02:08:17.197072 containerd[1908]: time="2025-12-16T02:08:17.195643232Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 02:08:17.203313 containerd[1908]: time="2025-12-16T02:08:17.201812948Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 02:08:17.203313 containerd[1908]: time="2025-12-16T02:08:17.201983024Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202126520Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202390496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202453424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202492904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202532888Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202573076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202606088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202643564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202681496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202721516Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202790720Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202826564Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:08:17.203632 containerd[1908]: time="2025-12-16T02:08:17.202858844Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:08:17.204169 containerd[1908]: time="2025-12-16T02:08:17.202893608Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:08:17.204169 containerd[1908]: time="2025-12-16T02:08:17.202916900Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 02:08:17.211246 containerd[1908]: time="2025-12-16T02:08:17.202955120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 02:08:17.214068 containerd[1908]: time="2025-12-16T02:08:17.211449584Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 02:08:17.214068 containerd[1908]: time="2025-12-16T02:08:17.211649888Z" level=info msg="runtime interface created" Dec 16 02:08:17.214068 containerd[1908]: time="2025-12-16T02:08:17.211679384Z" level=info msg="created NRI interface" Dec 16 02:08:17.214068 containerd[1908]: time="2025-12-16T02:08:17.211703996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 02:08:17.214068 containerd[1908]: time="2025-12-16T02:08:17.211746656Z" level=info msg="Connect containerd service" Dec 16 02:08:17.214068 containerd[1908]: time="2025-12-16T02:08:17.211824944Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 02:08:17.217269 containerd[1908]: time="2025-12-16T02:08:17.217213916Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:08:17.230195 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0403 INFO Agent will take identity from EC2 Dec 16 02:08:17.262005 polkitd[2044]: Started polkitd version 126 Dec 16 02:08:17.291614 polkitd[2044]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 02:08:17.295359 polkitd[2044]: Loading rules from directory /run/polkit-1/rules.d Dec 16 02:08:17.295469 polkitd[2044]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 02:08:17.298974 polkitd[2044]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 02:08:17.299095 polkitd[2044]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 02:08:17.299181 polkitd[2044]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 02:08:17.307152 polkitd[2044]: Finished loading, compiling 
and executing 2 rules Dec 16 02:08:17.307746 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 02:08:17.309396 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 02:08:17.312750 polkitd[2044]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 02:08:17.329896 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0516 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 02:08:17.370694 systemd-resolved[1449]: System hostname changed to 'ip-172-31-24-92'. Dec 16 02:08:17.370775 systemd-hostnamed[1902]: Hostname set to (transient) Dec 16 02:08:17.430067 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0517 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 16 02:08:17.528699 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0517 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 02:08:17.628396 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0517 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 02:08:17.636836 amazon-ssm-agent[1945]: 2025/12/16 02:08:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:17.636836 amazon-ssm-agent[1945]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 02:08:17.636836 amazon-ssm-agent[1945]: 2025/12/16 02:08:17 processing appconfig overrides Dec 16 02:08:17.678253 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0517 INFO [Registrar] Starting registrar module Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0597 INFO [EC2Identity] Checking disk for registration info Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0597 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.0597 INFO [EC2Identity] Generating registration keypair Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.5883 INFO [EC2Identity] Checking write access before registering Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.5889 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6354 INFO [EC2Identity] EC2 registration was successful. Dec 16 02:08:17.678519 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6355 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
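
The containerd error above, "no network config found in /etc/cni/net.d", is expected on a node that has not yet been joined to a cluster: the directory it scans (confDir /etc/cni/net.d, plugin binaries under /opt/cni/bin per the config dump earlier) is normally populated later by the cluster's network add-on. Purely as a sketch, and not this node's eventual configuration, a minimal bridge conflist that would satisfy the loader looks like this:

    # Hypothetical example; the real file is installed later by the cluster's CNI plugin.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF

The "Start cni network conf syncer for default" message later in this log is the watcher that would pick such a file up without a containerd restart; until then the warning simply stands.
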
Dec 16 02:08:17.679446 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6356 INFO [CredentialRefresher] credentialRefresher has started Dec 16 02:08:17.679446 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6356 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 02:08:17.679446 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6762 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 02:08:17.679446 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6781 INFO [CredentialRefresher] Credentials ready Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711440039Z" level=info msg="Start subscribing containerd event" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711561851Z" level=info msg="Start recovering state" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711797807Z" level=info msg="Start event monitor" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711838247Z" level=info msg="Start cni network conf syncer for default" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711893579Z" level=info msg="Start streaming server" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711914051Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711957119Z" level=info msg="runtime interface starting up..." Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711977927Z" level=info msg="starting plugins..." Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.712092047Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 02:08:17.713233 containerd[1908]: time="2025-12-16T02:08:17.711981443Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 02:08:17.719829 containerd[1908]: time="2025-12-16T02:08:17.713301203Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 02:08:17.719829 containerd[1908]: time="2025-12-16T02:08:17.716301923Z" level=info msg="containerd successfully booted in 0.639719s" Dec 16 02:08:17.713684 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 02:08:17.729915 amazon-ssm-agent[1945]: 2025-12-16 02:08:17.6786 INFO [CredentialRefresher] Next credential rotation will be in 29.9999606123 minutes Dec 16 02:08:17.851277 tar[1864]: linux-arm64/README.md Dec 16 02:08:17.882332 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 02:08:18.709966 amazon-ssm-agent[1945]: 2025-12-16 02:08:18.7091 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 02:08:18.720812 sshd_keygen[1899]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 02:08:18.761198 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 02:08:18.769596 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 02:08:18.797229 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 02:08:18.797794 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 02:08:18.804524 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 02:08:18.811728 amazon-ssm-agent[1945]: 2025-12-16 02:08:18.7165 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2102) started Dec 16 02:08:18.839642 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
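
With "containerd successfully booted" and both sockets being served, the CRI endpoint can be probed directly. A quick check, assuming crictl and ctr are available on the node (neither appears in this log):

    # Point crictl at the CRI socket logged above; 'info' includes the NetworkReady condition,
    # which stays false until the CNI config discussed earlier exists.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
    # Plain containerd client against the same socket:
    sudo ctr --address /run/containerd/containerd.sock version
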
Dec 16 02:08:18.845503 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 02:08:18.854915 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 02:08:18.860777 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 02:08:18.912095 amazon-ssm-agent[1945]: 2025-12-16 02:08:18.7166 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 02:08:20.199959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:20.206262 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 02:08:20.215162 systemd[1]: Startup finished in 4.102s (kernel) + 12.416s (initrd) + 14.996s (userspace) = 31.515s. Dec 16 02:08:20.217615 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:21.651856 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 02:08:21.656473 systemd[1]: Started sshd@0-172.31.24.92:22-139.178.89.65:55530.service - OpenSSH per-connection server daemon (139.178.89.65:55530). Dec 16 02:08:21.932641 kubelet[2134]: E1216 02:08:21.932483 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:21.938742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:21.939331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:08:21.940299 systemd[1]: kubelet.service: Consumed 1.379s CPU time, 248.9M memory peak. Dec 16 02:08:22.063340 sshd[2144]: Accepted publickey for core from 139.178.89.65 port 55530 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:22.068732 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:22.084633 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 02:08:22.087189 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 02:08:22.102400 systemd-logind[1853]: New session 1 of user core. Dec 16 02:08:22.124375 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 02:08:22.131364 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 02:08:22.159474 (systemd)[2152]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:22.165323 systemd-logind[1853]: New session 2 of user core. Dec 16 02:08:22.464766 systemd[2152]: Queued start job for default target default.target. Dec 16 02:08:22.476706 systemd[2152]: Created slice app.slice - User Application Slice. Dec 16 02:08:22.476792 systemd[2152]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 02:08:22.476825 systemd[2152]: Reached target paths.target - Paths. Dec 16 02:08:22.476937 systemd[2152]: Reached target timers.target - Timers. Dec 16 02:08:22.479763 systemd[2152]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 02:08:22.483373 systemd[2152]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 02:08:22.512972 systemd[2152]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
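
The kubelet exit above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written during cluster bootstrap (for example by kubeadm init or kubeadm join), so the unit fails and is restarted until that happens, as seen again near the end of this log. For illustration only, with assumed values rather than this node's eventual settings, a minimal KubeletConfiguration of the kind that lands there looks like:

    # Hypothetical sketch of /var/lib/kubelet/config.yaml; bootstrap tooling writes the real one.
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup: true in the runc options dumped above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
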
Dec 16 02:08:22.513184 systemd[2152]: Reached target sockets.target - Sockets. Dec 16 02:08:22.514508 systemd[2152]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 02:08:22.514668 systemd[2152]: Reached target basic.target - Basic System. Dec 16 02:08:22.514787 systemd[2152]: Reached target default.target - Main User Target. Dec 16 02:08:22.514864 systemd[2152]: Startup finished in 336ms. Dec 16 02:08:22.515514 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 02:08:22.520553 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 02:08:22.618626 systemd[1]: Started sshd@1-172.31.24.92:22-139.178.89.65:55546.service - OpenSSH per-connection server daemon (139.178.89.65:55546). Dec 16 02:08:23.241615 systemd-resolved[1449]: Clock change detected. Flushing caches. Dec 16 02:08:23.311699 sshd[2166]: Accepted publickey for core from 139.178.89.65 port 55546 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:23.314598 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:23.324512 systemd-logind[1853]: New session 3 of user core. Dec 16 02:08:23.343813 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 02:08:23.412445 sshd[2170]: Connection closed by 139.178.89.65 port 55546 Dec 16 02:08:23.413325 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:23.422163 systemd[1]: sshd@1-172.31.24.92:22-139.178.89.65:55546.service: Deactivated successfully. Dec 16 02:08:23.426040 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 02:08:23.428401 systemd-logind[1853]: Session 3 logged out. Waiting for processes to exit. Dec 16 02:08:23.431649 systemd-logind[1853]: Removed session 3. Dec 16 02:08:23.447753 systemd[1]: Started sshd@2-172.31.24.92:22-139.178.89.65:55556.service - OpenSSH per-connection server daemon (139.178.89.65:55556). Dec 16 02:08:23.657341 sshd[2176]: Accepted publickey for core from 139.178.89.65 port 55556 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:23.660335 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:23.670014 systemd-logind[1853]: New session 4 of user core. Dec 16 02:08:23.681825 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 02:08:23.743500 sshd[2180]: Connection closed by 139.178.89.65 port 55556 Dec 16 02:08:23.744367 sshd-session[2176]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:23.754655 systemd[1]: sshd@2-172.31.24.92:22-139.178.89.65:55556.service: Deactivated successfully. Dec 16 02:08:23.760081 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 02:08:23.762840 systemd-logind[1853]: Session 4 logged out. Waiting for processes to exit. Dec 16 02:08:23.778640 systemd-logind[1853]: Removed session 4. Dec 16 02:08:23.781060 systemd[1]: Started sshd@3-172.31.24.92:22-139.178.89.65:55566.service - OpenSSH per-connection server daemon (139.178.89.65:55566). Dec 16 02:08:23.972834 sshd[2186]: Accepted publickey for core from 139.178.89.65 port 55566 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:23.975547 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:23.986552 systemd-logind[1853]: New session 5 of user core. Dec 16 02:08:23.993802 systemd[1]: Started session-5.scope - Session 5 of User core. 
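
The user@500.service and session-N.scope units above are systemd-logind tracking each SSH login for the core user; every connect and disconnect from 139.178.89.65 opens and tears down one scope. The same state can be read back interactively (commands assumed available on the host, output not reproduced here):

    loginctl list-sessions              # one row per session-N.scope shown above
    loginctl show-user core             # the user-500.slice and per-user runtime state
    systemctl status user@500.service   # the per-user manager that reported "Startup finished in 336ms"
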
Dec 16 02:08:24.061889 sshd[2190]: Connection closed by 139.178.89.65 port 55566 Dec 16 02:08:24.061736 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:24.069605 systemd-logind[1853]: Session 5 logged out. Waiting for processes to exit. Dec 16 02:08:24.069838 systemd[1]: sshd@3-172.31.24.92:22-139.178.89.65:55566.service: Deactivated successfully. Dec 16 02:08:24.074873 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 02:08:24.083791 systemd-logind[1853]: Removed session 5. Dec 16 02:08:24.105115 systemd[1]: Started sshd@4-172.31.24.92:22-139.178.89.65:55570.service - OpenSSH per-connection server daemon (139.178.89.65:55570). Dec 16 02:08:24.312718 sshd[2196]: Accepted publickey for core from 139.178.89.65 port 55570 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:24.315197 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:24.325531 systemd-logind[1853]: New session 6 of user core. Dec 16 02:08:24.332750 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 02:08:24.397027 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 02:08:24.397848 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:24.413605 sudo[2201]: pam_unix(sudo:session): session closed for user root Dec 16 02:08:24.437979 sshd[2200]: Connection closed by 139.178.89.65 port 55570 Dec 16 02:08:24.438311 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:24.449330 systemd[1]: sshd@4-172.31.24.92:22-139.178.89.65:55570.service: Deactivated successfully. Dec 16 02:08:24.454189 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 02:08:24.457577 systemd-logind[1853]: Session 6 logged out. Waiting for processes to exit. Dec 16 02:08:24.481052 systemd[1]: Started sshd@5-172.31.24.92:22-139.178.89.65:55584.service - OpenSSH per-connection server daemon (139.178.89.65:55584). Dec 16 02:08:24.483959 systemd-logind[1853]: Removed session 6. Dec 16 02:08:24.672313 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 55584 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:24.674763 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:24.685030 systemd-logind[1853]: New session 7 of user core. Dec 16 02:08:24.694847 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 02:08:24.744699 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 02:08:24.745374 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:24.751787 sudo[2214]: pam_unix(sudo:session): session closed for user root Dec 16 02:08:24.765079 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 02:08:24.766058 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:24.784459 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
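
The sudo records above show the core user running setenforce 1, deleting two audit rule drop-ins, and restarting audit-rules.service; on Flatcar the core user typically has blanket passwordless sudo, which is why all three succeed. A narrower sudoers drop-in that would allow exactly these commands, shown only as an illustration and not the policy actually in effect here, could look like:

    # Hypothetical /etc/sudoers.d/25-core-maintenance (edit with visudo -f); paths mirror the log above.
    core ALL=(root) NOPASSWD: /usr/sbin/setenforce 1, \
        /usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules, \
        /usr/sbin/systemctl restart audit-rules
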
Dec 16 02:08:24.859000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:08:24.862175 kernel: kauditd_printk_skb: 144 callbacks suppressed Dec 16 02:08:24.862272 kernel: audit: type=1305 audit(1765850904.859:238): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:08:24.862354 augenrules[2238]: No rules Dec 16 02:08:24.866115 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:08:24.873023 kernel: audit: type=1300 audit(1765850904.859:238): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb55e0c0 a2=420 a3=0 items=0 ppid=2219 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.859000 audit[2238]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb55e0c0 a2=420 a3=0 items=0 ppid=2219 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:24.868553 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 02:08:24.873937 sudo[2213]: pam_unix(sudo:session): session closed for user root Dec 16 02:08:24.859000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:08:24.878540 kernel: audit: type=1327 audit(1765850904.859:238): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:08:24.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.883381 kernel: audit: type=1130 audit(1765850904.868:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.888203 kernel: audit: type=1131 audit(1765850904.868:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.872000 audit[2213]: USER_END pid=2213 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.872000 audit[2213]: CRED_DISP pid=2213 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.898459 kernel: audit: type=1106 audit(1765850904.872:241): pid=2213 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 02:08:24.898585 kernel: audit: type=1104 audit(1765850904.872:242): pid=2213 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.898625 sshd[2212]: Connection closed by 139.178.89.65 port 55584 Dec 16 02:08:24.899011 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Dec 16 02:08:24.901000 audit[2208]: USER_END pid=2208 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:24.909756 systemd[1]: sshd@5-172.31.24.92:22-139.178.89.65:55584.service: Deactivated successfully. Dec 16 02:08:24.902000 audit[2208]: CRED_DISP pid=2208 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:24.916281 kernel: audit: type=1106 audit(1765850904.901:243): pid=2208 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:24.916382 kernel: audit: type=1104 audit(1765850904.902:244): pid=2208 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:24.915368 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 02:08:24.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.92:22-139.178.89.65:55584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.921891 kernel: audit: type=1131 audit(1765850904.909:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.92:22-139.178.89.65:55584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.922126 systemd-logind[1853]: Session 7 logged out. Waiting for processes to exit. Dec 16 02:08:24.940578 systemd[1]: Started sshd@6-172.31.24.92:22-139.178.89.65:55590.service - OpenSSH per-connection server daemon (139.178.89.65:55590). Dec 16 02:08:24.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.92:22-139.178.89.65:55590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:24.943563 systemd-logind[1853]: Removed session 7. 
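
audit-rules.service runs augenrules, which concatenates /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loads the result with auditctl (the PROCTITLE hex above decodes to /sbin/auditctl -R /etc/audit/audit.rules); because the drop-ins were just removed, augenrules reports "No rules". Restoring a rule later would follow the same path, for example (the watch rule below is an assumption, not taken from this log):

    # Add one watch rule, rebuild the compiled rule file, and list what the kernel loaded.
    echo '-w /etc/passwd -p wa -k identity' | sudo tee /etc/audit/rules.d/50-identity.rules
    sudo augenrules --load
    sudo auditctl -l
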
Dec 16 02:08:25.138000 audit[2247]: USER_ACCT pid=2247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:25.139988 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 55590 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:08:25.140000 audit[2247]: CRED_ACQ pid=2247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:25.140000 audit[2247]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdea1db90 a2=3 a3=0 items=0 ppid=1 pid=2247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:25.140000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:08:25.143036 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:08:25.151521 systemd-logind[1853]: New session 8 of user core. Dec 16 02:08:25.167748 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 02:08:25.173000 audit[2247]: USER_START pid=2247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:25.176000 audit[2251]: CRED_ACQ pid=2251 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:08:25.214000 audit[2252]: USER_ACCT pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:25.214000 audit[2252]: CRED_REFR pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:25.215000 audit[2252]: USER_START pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:08:25.215955 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 02:08:25.216606 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:08:26.405475 systemd[1]: Starting docker.service - Docker Application Container Engine... 
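
Each sudo invocation, like the /home/core/install.sh run above, leaves a USER_ACCT / CRED_REFR / USER_START triple in the audit stream, and the matching sshd session adds its own CRED_ACQ and USER_START records. The same trail can be queried after the fact, assuming the standard audit userspace tools are installed:

    # Pull the session and sudo events for login uid 500 (the core user) back out of the audit log.
    sudo ausearch -m USER_START,USER_ACCT,CRED_REFR -ul 500 --interpret
    # Or follow audit records live through the journal:
    sudo journalctl -f _TRANSPORT=audit
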
Dec 16 02:08:26.435076 (dockerd)[2271]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 02:08:27.552450 dockerd[2271]: time="2025-12-16T02:08:27.551998525Z" level=info msg="Starting up" Dec 16 02:08:27.553971 dockerd[2271]: time="2025-12-16T02:08:27.553931257Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 02:08:27.577303 dockerd[2271]: time="2025-12-16T02:08:27.577239337Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 02:08:27.640252 systemd[1]: var-lib-docker-metacopy\x2dcheck1414377395-merged.mount: Deactivated successfully. Dec 16 02:08:27.659469 dockerd[2271]: time="2025-12-16T02:08:27.659346073Z" level=info msg="Loading containers: start." Dec 16 02:08:27.722505 kernel: Initializing XFRM netlink socket Dec 16 02:08:27.886000 audit[2321]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.886000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd9ecd8c0 a2=0 a3=0 items=0 ppid=2271 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:08:27.891000 audit[2323]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.891000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe81afd20 a2=0 a3=0 items=0 ppid=2271 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:08:27.896000 audit[2325]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.896000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc83b9b40 a2=0 a3=0 items=0 ppid=2271 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:08:27.901000 audit[2327]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.901000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff390db90 a2=0 a3=0 items=0 ppid=2271 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.901000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:08:27.905000 audit[2329]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.905000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4f2a8c0 a2=0 a3=0 items=0 ppid=2271 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:08:27.910000 audit[2331]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.910000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd0d5b420 a2=0 a3=0 items=0 ppid=2271 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.910000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:27.914000 audit[2333]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.914000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd1e0b510 a2=0 a3=0 items=0 ppid=2271 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:08:27.920000 audit[2335]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.920000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffb5cb4a0 a2=0 a3=0 items=0 ppid=2271 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:08:27.966000 audit[2338]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.966000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffe4c6e810 a2=0 a3=0 items=0 ppid=2271 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.966000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 02:08:27.972000 audit[2340]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.972000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff1c24160 a2=0 a3=0 items=0 ppid=2271 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:08:27.980000 audit[2342]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.980000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffeddf10d0 a2=0 a3=0 items=0 ppid=2271 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:08:27.986000 audit[2344]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.986000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd9fdcd60 a2=0 a3=0 items=0 ppid=2271 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:27.991000 audit[2346]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:27.991000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff6754850 a2=0 a3=0 items=0 ppid=2271 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:27.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:08:28.068000 audit[2376]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.068000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffcaf3340 a2=0 a3=0 items=0 ppid=2271 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.068000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:08:28.073000 audit[2378]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.073000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe0d32fb0 a2=0 a3=0 items=0 ppid=2271 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:08:28.078000 audit[2380]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.078000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1def5e0 a2=0 a3=0 items=0 ppid=2271 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.078000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:08:28.083000 audit[2382]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.083000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce4845d0 a2=0 a3=0 items=0 ppid=2271 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.083000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:08:28.088000 audit[2384]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.088000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1946330 a2=0 a3=0 items=0 ppid=2271 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:08:28.092000 audit[2386]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.092000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd405d020 a2=0 a3=0 items=0 ppid=2271 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.092000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:28.097000 audit[2388]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2388 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.097000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe9d90cb0 a2=0 a3=0 items=0 ppid=2271 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.097000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:08:28.103000 audit[2390]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.103000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe52994e0 a2=0 a3=0 items=0 ppid=2271 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.103000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:08:28.108000 audit[2392]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.108000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd4693360 a2=0 a3=0 items=0 ppid=2271 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 02:08:28.113000 audit[2394]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.113000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcc9960f0 a2=0 a3=0 items=0 ppid=2271 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:08:28.118000 audit[2396]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.118000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd8158670 a2=0 a3=0 items=0 ppid=2271 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:08:28.123000 audit[2398]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2398 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.123000 audit[2398]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc1311870 a2=0 a3=0 items=0 ppid=2271 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.123000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:08:28.128000 audit[2400]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.128000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffe634940 a2=0 a3=0 items=0 ppid=2271 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:08:28.139000 audit[2405]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.139000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff6324550 a2=0 a3=0 items=0 ppid=2271 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:08:28.147000 audit[2407]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.147000 audit[2407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff6d63d00 a2=0 a3=0 items=0 ppid=2271 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:08:28.151000 audit[2409]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.151000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcc6c14b0 a2=0 a3=0 items=0 ppid=2271 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:08:28.156000 audit[2411]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.156000 audit[2411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc6856780 a2=0 a3=0 items=0 ppid=2271 pid=2411 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:08:28.161000 audit[2413]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.161000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe35eacf0 a2=0 a3=0 items=0 ppid=2271 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:08:28.166000 audit[2415]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:28.166000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc8972d80 a2=0 a3=0 items=0 ppid=2271 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:08:28.187903 (udev-worker)[2293]: Network interface NamePolicy= disabled on kernel command line. Dec 16 02:08:28.199000 audit[2419]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.199000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffec98fad0 a2=0 a3=0 items=0 ppid=2271 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 02:08:28.209000 audit[2421]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.209000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd203d1c0 a2=0 a3=0 items=0 ppid=2271 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 02:08:28.231000 audit[2429]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.231000 audit[2429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc1539dd0 a2=0 a3=0 items=0 ppid=2271 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 02:08:28.250000 audit[2435]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.250000 audit[2435]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc0874eb0 a2=0 a3=0 items=0 ppid=2271 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 02:08:28.255000 audit[2437]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.255000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffea072c80 a2=0 a3=0 items=0 ppid=2271 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 02:08:28.260000 audit[2439]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.260000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffffe38890 a2=0 a3=0 items=0 ppid=2271 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 02:08:28.266000 audit[2441]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.266000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe56c3250 a2=0 a3=0 items=0 ppid=2271 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:08:28.271000 audit[2443]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:28.271000 audit[2443]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=312 a0=3 a1=fffff5824250 a2=0 a3=0 items=0 ppid=2271 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:28.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 02:08:28.274165 systemd-networkd[1478]: docker0: Link UP Dec 16 02:08:28.287109 dockerd[2271]: time="2025-12-16T02:08:28.287019829Z" level=info msg="Loading containers: done." Dec 16 02:08:28.342347 dockerd[2271]: time="2025-12-16T02:08:28.342257041Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 02:08:28.342660 dockerd[2271]: time="2025-12-16T02:08:28.342390757Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 02:08:28.342819 dockerd[2271]: time="2025-12-16T02:08:28.342765337Z" level=info msg="Initializing buildkit" Dec 16 02:08:28.401140 dockerd[2271]: time="2025-12-16T02:08:28.398989861Z" level=info msg="Completed buildkit initialization" Dec 16 02:08:28.417025 dockerd[2271]: time="2025-12-16T02:08:28.416925109Z" level=info msg="Daemon has completed initialization" Dec 16 02:08:28.417701 dockerd[2271]: time="2025-12-16T02:08:28.417220045Z" level=info msg="API listen on /run/docker.sock" Dec 16 02:08:28.417601 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 02:08:28.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:28.611402 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1943293631-merged.mount: Deactivated successfully. Dec 16 02:08:29.396524 containerd[1908]: time="2025-12-16T02:08:29.396465194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 02:08:30.309528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576516230.mount: Deactivated successfully. 
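The audit PROCTITLE fields in the iptables records above are the process argv, hex-encoded with NUL separators; decoded, the first one is the Docker bridge rule /usr/bin/iptables --wait -t filter -A DOCKER-FORWARD -i docker0 -j ACCEPT. A minimal decoding sketch, assuming Python 3 (the sample string is copied verbatim from the log above):

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
proctitle = (
    "2F7573722F62696E2F69707461626C6573"          # /usr/bin/iptables
    "002D2D77616974002D740066696C746572"          # --wait -t filter
    "002D4100444F434B45522D464F5257415244"        # -A DOCKER-FORWARD
    "002D6900646F636B657230002D6A00414343455054"  # -i docker0 -j ACCEPT
)
argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /usr/bin/iptables --wait -t filter -A DOCKER-FORWARD -i docker0 -j ACCEPT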
Dec 16 02:08:31.694061 containerd[1908]: time="2025-12-16T02:08:31.693994241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:31.696142 containerd[1908]: time="2025-12-16T02:08:31.696052493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22977756" Dec 16 02:08:31.698655 containerd[1908]: time="2025-12-16T02:08:31.698576178Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:31.704463 containerd[1908]: time="2025-12-16T02:08:31.704300682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:31.706605 containerd[1908]: time="2025-12-16T02:08:31.706275942Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.309747448s" Dec 16 02:08:31.706605 containerd[1908]: time="2025-12-16T02:08:31.706342230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 02:08:31.707376 containerd[1908]: time="2025-12-16T02:08:31.707288058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 02:08:32.461250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 02:08:32.467816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:32.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:32.940787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:32.943466 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 02:08:32.943596 kernel: audit: type=1130 audit(1765850912.939:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:32.961668 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:33.085166 kubelet[2551]: E1216 02:08:33.085059 2551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:33.095345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:33.096043 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
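kubelet exits here because /var/lib/kubelet/config.yaml has not been written yet, and systemd keeps rescheduling the unit (the restart counter reaches 2 further down before a later start succeeds). A small sketch, assuming Python 3.8+ and regexes tailored to this journal's wording, that pulls both signals out of a dump like this one:

# Trace the kubelet crash loop: systemd restart counter plus the missing-config error.
import re
import sys

RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
MISSING = re.compile(r"failed to load Kubelet config file ([^,\s]+)")

for line in sys.stdin:
    if m := RESTART.search(line):
        print(f"restart attempt #{m.group(1)}")
    if m := MISSING.search(line):
        print(f"  exiting again: {m.group(1)} not present yet")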
Dec 16 02:08:33.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:33.097487 systemd[1]: kubelet.service: Consumed 386ms CPU time, 107.5M memory peak. Dec 16 02:08:33.103512 kernel: audit: type=1131 audit(1765850913.095:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:33.471824 containerd[1908]: time="2025-12-16T02:08:33.471737790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:33.474122 containerd[1908]: time="2025-12-16T02:08:33.473563062Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Dec 16 02:08:33.475294 containerd[1908]: time="2025-12-16T02:08:33.475233006Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:33.482189 containerd[1908]: time="2025-12-16T02:08:33.482129730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:33.484312 containerd[1908]: time="2025-12-16T02:08:33.484252002Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.77667304s" Dec 16 02:08:33.484672 containerd[1908]: time="2025-12-16T02:08:33.484528842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 02:08:33.485438 containerd[1908]: time="2025-12-16T02:08:33.485128662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 02:08:34.962323 containerd[1908]: time="2025-12-16T02:08:34.962225794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:34.965943 containerd[1908]: time="2025-12-16T02:08:34.965816434Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Dec 16 02:08:34.969063 containerd[1908]: time="2025-12-16T02:08:34.968964574Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:34.975472 containerd[1908]: time="2025-12-16T02:08:34.974588938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:34.977580 containerd[1908]: time="2025-12-16T02:08:34.976944886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" 
with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.491759032s" Dec 16 02:08:34.977580 containerd[1908]: time="2025-12-16T02:08:34.977038006Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 02:08:34.977813 containerd[1908]: time="2025-12-16T02:08:34.977747638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 02:08:36.298435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534983034.mount: Deactivated successfully. Dec 16 02:08:36.714590 containerd[1908]: time="2025-12-16T02:08:36.714471214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:36.715609 containerd[1908]: time="2025-12-16T02:08:36.715355434Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 16 02:08:36.717492 containerd[1908]: time="2025-12-16T02:08:36.717220150Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:36.721967 containerd[1908]: time="2025-12-16T02:08:36.721861558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:36.724634 containerd[1908]: time="2025-12-16T02:08:36.723343894Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.745516384s" Dec 16 02:08:36.724634 containerd[1908]: time="2025-12-16T02:08:36.723457306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 02:08:36.724979 containerd[1908]: time="2025-12-16T02:08:36.724762030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 02:08:37.399735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823857211.mount: Deactivated successfully. 
Dec 16 02:08:38.496265 containerd[1908]: time="2025-12-16T02:08:38.496199363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:38.497472 containerd[1908]: time="2025-12-16T02:08:38.496948751Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19575985" Dec 16 02:08:38.499301 containerd[1908]: time="2025-12-16T02:08:38.499240283Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:38.505394 containerd[1908]: time="2025-12-16T02:08:38.505297967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:38.508449 containerd[1908]: time="2025-12-16T02:08:38.507994679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.783175133s" Dec 16 02:08:38.508449 containerd[1908]: time="2025-12-16T02:08:38.508063523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 02:08:38.509112 containerd[1908]: time="2025-12-16T02:08:38.509045279Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 02:08:39.024386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572665504.mount: Deactivated successfully. 
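Each completed pull above reports the image, its resolved digest, and the wall-clock time containerd spent on it (from 1.49s for kube-scheduler up to 2.31s for kube-apiserver so far). A small sketch, assuming Python 3 and this journal's backslash-escaped quotes inside containerd's msg fields, to tabulate those durations:

import re
import sys

# Matches lines like:  msg="Pulled image \"registry.k8s.io/...\" ... in 2.309747448s"
PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+)(ms|s)"')

for line in sys.stdin:
    for image, value, unit in PULLED.findall(line):
        seconds = float(value) / 1000.0 if unit == "ms" else float(value)
        print(f"{seconds:7.3f}s  {image}")
# e.g.   2.310s  registry.k8s.io/kube-apiserver:v1.34.3
#        1.492s  registry.k8s.io/kube-scheduler:v1.34.3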
Dec 16 02:08:39.031109 containerd[1908]: time="2025-12-16T02:08:39.031021762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:39.033295 containerd[1908]: time="2025-12-16T02:08:39.033206650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 02:08:39.034685 containerd[1908]: time="2025-12-16T02:08:39.034588546Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:39.038258 containerd[1908]: time="2025-12-16T02:08:39.038172430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:39.040293 containerd[1908]: time="2025-12-16T02:08:39.040231318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 531.116691ms" Dec 16 02:08:39.040529 containerd[1908]: time="2025-12-16T02:08:39.040495138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 02:08:39.041884 containerd[1908]: time="2025-12-16T02:08:39.041826526Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 02:08:39.582217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563714690.mount: Deactivated successfully. Dec 16 02:08:43.211275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 02:08:43.216765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 02:08:43.825969 containerd[1908]: time="2025-12-16T02:08:43.825897942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:43.830405 containerd[1908]: time="2025-12-16T02:08:43.829595142Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=85821047" Dec 16 02:08:43.836445 containerd[1908]: time="2025-12-16T02:08:43.836302278Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:43.843503 containerd[1908]: time="2025-12-16T02:08:43.843369450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:08:43.848061 containerd[1908]: time="2025-12-16T02:08:43.847989594Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.805716524s" Dec 16 02:08:43.848200 containerd[1908]: time="2025-12-16T02:08:43.848054922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 02:08:43.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:43.854769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:43.866472 kernel: audit: type=1130 audit(1765850923.854:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:43.874954 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:08:43.969337 kubelet[2692]: E1216 02:08:43.969249 2692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:08:43.974184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:08:43.975753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:08:43.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:08:43.978579 systemd[1]: kubelet.service: Consumed 320ms CPU time, 108.9M memory peak. Dec 16 02:08:43.984604 kernel: audit: type=1131 audit(1765850923.978:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 02:08:47.870184 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 02:08:47.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:47.882440 kernel: audit: type=1131 audit(1765850927.870:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:47.891000 audit: BPF prog-id=66 op=UNLOAD Dec 16 02:08:47.893440 kernel: audit: type=1334 audit(1765850927.891:301): prog-id=66 op=UNLOAD Dec 16 02:08:50.618587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:50.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:50.619779 systemd[1]: kubelet.service: Consumed 320ms CPU time, 108.9M memory peak. Dec 16 02:08:50.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:50.625799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:50.630485 kernel: audit: type=1130 audit(1765850930.619:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:50.630608 kernel: audit: type=1131 audit(1765850930.619:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:50.689179 systemd[1]: Reload requested from client PID 2727 ('systemctl') (unit session-8.scope)... Dec 16 02:08:50.689376 systemd[1]: Reloading... Dec 16 02:08:50.960554 zram_generator::config[2780]: No configuration found. Dec 16 02:08:51.437386 systemd[1]: Reloading finished in 747 ms. 
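The kernel's audit lines carry their own epoch timestamps, e.g. audit(1765850927.870:300) for the systemd-hostnamed stop above; converting the epoch part back to wall-clock time lines it up with the journal prefix on the same records. A minimal sketch, assuming Python 3 and that these journal timestamps are UTC:

from datetime import datetime, timezone

def audit_to_wallclock(stamp: str) -> str:
    # stamp is the "<epoch>:<serial>" part inside audit(...), e.g. "1765850927.870:300"
    epoch, _, serial = stamp.partition(":")
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return f"{ts:%b %d %H:%M:%S.%f} UTC (audit serial {serial})"

print(audit_to_wallclock("1765850927.870:300"))
# -> Dec 16 02:08:47.870000 UTC (audit serial 300), matching the journal line above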
Dec 16 02:08:51.502443 kernel: audit: type=1334 audit(1765850931.497:304): prog-id=70 op=LOAD Dec 16 02:08:51.502577 kernel: audit: type=1334 audit(1765850931.497:305): prog-id=71 op=LOAD Dec 16 02:08:51.502627 kernel: audit: type=1334 audit(1765850931.497:306): prog-id=72 op=LOAD Dec 16 02:08:51.497000 audit: BPF prog-id=70 op=LOAD Dec 16 02:08:51.497000 audit: BPF prog-id=71 op=LOAD Dec 16 02:08:51.497000 audit: BPF prog-id=72 op=LOAD Dec 16 02:08:51.507375 kernel: audit: type=1334 audit(1765850931.498:307): prog-id=60 op=UNLOAD Dec 16 02:08:51.507496 kernel: audit: type=1334 audit(1765850931.498:308): prog-id=61 op=UNLOAD Dec 16 02:08:51.507542 kernel: audit: type=1334 audit(1765850931.498:309): prog-id=62 op=UNLOAD Dec 16 02:08:51.498000 audit: BPF prog-id=60 op=UNLOAD Dec 16 02:08:51.498000 audit: BPF prog-id=61 op=UNLOAD Dec 16 02:08:51.498000 audit: BPF prog-id=62 op=UNLOAD Dec 16 02:08:51.510352 kernel: audit: type=1334 audit(1765850931.501:310): prog-id=73 op=LOAD Dec 16 02:08:51.510474 kernel: audit: type=1334 audit(1765850931.501:311): prog-id=53 op=UNLOAD Dec 16 02:08:51.501000 audit: BPF prog-id=73 op=LOAD Dec 16 02:08:51.501000 audit: BPF prog-id=53 op=UNLOAD Dec 16 02:08:51.508000 audit: BPF prog-id=74 op=LOAD Dec 16 02:08:51.514000 audit: BPF prog-id=75 op=LOAD Dec 16 02:08:51.514000 audit: BPF prog-id=54 op=UNLOAD Dec 16 02:08:51.514000 audit: BPF prog-id=55 op=UNLOAD Dec 16 02:08:51.516000 audit: BPF prog-id=76 op=LOAD Dec 16 02:08:51.516000 audit: BPF prog-id=47 op=UNLOAD Dec 16 02:08:51.527000 audit: BPF prog-id=77 op=LOAD Dec 16 02:08:51.527000 audit: BPF prog-id=56 op=UNLOAD Dec 16 02:08:51.529000 audit: BPF prog-id=78 op=LOAD Dec 16 02:08:51.529000 audit: BPF prog-id=57 op=UNLOAD Dec 16 02:08:51.530000 audit: BPF prog-id=79 op=LOAD Dec 16 02:08:51.530000 audit: BPF prog-id=80 op=LOAD Dec 16 02:08:51.530000 audit: BPF prog-id=58 op=UNLOAD Dec 16 02:08:51.530000 audit: BPF prog-id=59 op=UNLOAD Dec 16 02:08:51.532000 audit: BPF prog-id=81 op=LOAD Dec 16 02:08:51.532000 audit: BPF prog-id=50 op=UNLOAD Dec 16 02:08:51.532000 audit: BPF prog-id=82 op=LOAD Dec 16 02:08:51.532000 audit: BPF prog-id=83 op=LOAD Dec 16 02:08:51.532000 audit: BPF prog-id=51 op=UNLOAD Dec 16 02:08:51.532000 audit: BPF prog-id=52 op=UNLOAD Dec 16 02:08:51.533000 audit: BPF prog-id=84 op=LOAD Dec 16 02:08:51.533000 audit: BPF prog-id=85 op=LOAD Dec 16 02:08:51.533000 audit: BPF prog-id=48 op=UNLOAD Dec 16 02:08:51.533000 audit: BPF prog-id=49 op=UNLOAD Dec 16 02:08:51.536000 audit: BPF prog-id=86 op=LOAD Dec 16 02:08:51.536000 audit: BPF prog-id=69 op=UNLOAD Dec 16 02:08:51.540000 audit: BPF prog-id=87 op=LOAD Dec 16 02:08:51.540000 audit: BPF prog-id=63 op=UNLOAD Dec 16 02:08:51.540000 audit: BPF prog-id=88 op=LOAD Dec 16 02:08:51.540000 audit: BPF prog-id=89 op=LOAD Dec 16 02:08:51.540000 audit: BPF prog-id=64 op=UNLOAD Dec 16 02:08:51.540000 audit: BPF prog-id=65 op=UNLOAD Dec 16 02:08:51.570571 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 02:08:51.570762 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 02:08:51.571496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:51.571605 systemd[1]: kubelet.service: Consumed 229ms CPU time, 95.1M memory peak. Dec 16 02:08:51.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 02:08:51.574877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:08:51.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:08:51.920883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:08:51.939998 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:08:52.020334 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:08:52.020334 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:08:52.022480 kubelet[2837]: I1216 02:08:52.022037 2837 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:08:52.565669 kubelet[2837]: I1216 02:08:52.565598 2837 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 02:08:52.566101 kubelet[2837]: I1216 02:08:52.565905 2837 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:08:52.568760 kubelet[2837]: I1216 02:08:52.568664 2837 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 02:08:52.568760 kubelet[2837]: I1216 02:08:52.568715 2837 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 02:08:52.570474 kubelet[2837]: I1216 02:08:52.569779 2837 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:08:52.580841 kubelet[2837]: E1216 02:08:52.580750 2837 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 02:08:52.583288 kubelet[2837]: I1216 02:08:52.583238 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:08:52.592639 kubelet[2837]: I1216 02:08:52.592587 2837 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:08:52.598794 kubelet[2837]: I1216 02:08:52.598734 2837 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 02:08:52.599302 kubelet[2837]: I1216 02:08:52.599242 2837 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:08:52.599626 kubelet[2837]: I1216 02:08:52.599300 2837 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:08:52.599853 kubelet[2837]: I1216 02:08:52.599627 2837 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:08:52.599853 kubelet[2837]: I1216 02:08:52.599652 2837 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 02:08:52.599853 kubelet[2837]: I1216 02:08:52.599835 2837 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 02:08:52.604719 kubelet[2837]: I1216 02:08:52.604635 2837 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:08:52.608462 kubelet[2837]: I1216 02:08:52.607181 2837 kubelet.go:475] "Attempting to sync node with API server" Dec 16 02:08:52.608462 kubelet[2837]: I1216 02:08:52.607250 2837 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:08:52.608462 kubelet[2837]: I1216 02:08:52.607296 2837 kubelet.go:387] "Adding apiserver pod source" Dec 16 02:08:52.608462 kubelet[2837]: I1216 02:08:52.607318 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:08:52.609728 kubelet[2837]: E1216 02:08:52.609626 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-92&limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:08:52.609986 kubelet[2837]: E1216 02:08:52.609929 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 02:08:52.610714 kubelet[2837]: I1216 02:08:52.610645 2837 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:08:52.611947 kubelet[2837]: I1216 02:08:52.611883 2837 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:08:52.612079 kubelet[2837]: I1216 02:08:52.611961 2837 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 02:08:52.612079 kubelet[2837]: W1216 02:08:52.612046 2837 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 02:08:52.618383 kubelet[2837]: I1216 02:08:52.618317 2837 server.go:1262] "Started kubelet" Dec 16 02:08:52.630534 kubelet[2837]: I1216 02:08:52.628713 2837 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:08:52.633701 kubelet[2837]: I1216 02:08:52.633587 2837 server.go:310] "Adding debug handlers to kubelet server" Dec 16 02:08:52.638488 kubelet[2837]: E1216 02:08:52.635502 2837 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.92:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.92:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-92.188190113d010ff1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-92,UID:ip-172-31-24-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-92,},FirstTimestamp:2025-12-16 02:08:52.618268657 +0000 UTC m=+0.670571788,LastTimestamp:2025-12-16 02:08:52.618268657 +0000 UTC m=+0.670571788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-92,}" Dec 16 02:08:52.642653 kubelet[2837]: I1216 02:08:52.642598 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:08:52.645320 kubelet[2837]: I1216 02:08:52.645205 2837 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:08:52.645529 kubelet[2837]: I1216 02:08:52.645336 2837 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 02:08:52.645819 kubelet[2837]: I1216 02:08:52.645755 2837 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:08:52.646019 kubelet[2837]: I1216 02:08:52.645983 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:08:52.650981 kubelet[2837]: I1216 02:08:52.650927 2837 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 02:08:52.651528 kubelet[2837]: E1216 02:08:52.651478 2837 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-92\" not found" Dec 16 02:08:52.652822 kubelet[2837]: I1216 02:08:52.652775 2837 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 02:08:52.653159 kubelet[2837]: I1216 
02:08:52.653125 2837 reconciler.go:29] "Reconciler: start to sync state" Dec 16 02:08:52.657520 kubelet[2837]: E1216 02:08:52.656307 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 02:08:52.658033 kubelet[2837]: E1216 02:08:52.657955 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-92?timeout=10s\": dial tcp 172.31.24.92:6443: connect: connection refused" interval="200ms" Dec 16 02:08:52.658000 audit[2853]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.658000 audit[2853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9e152e0 a2=0 a3=0 items=0 ppid=2837 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:08:52.660815 kubelet[2837]: I1216 02:08:52.660759 2837 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:08:52.661187 kubelet[2837]: I1216 02:08:52.661136 2837 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:08:52.661000 audit[2854]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.661000 audit[2854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7894640 a2=0 a3=0 items=0 ppid=2837 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:08:52.665472 kubelet[2837]: I1216 02:08:52.664276 2837 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:08:52.678000 audit[2858]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.678000 audit[2858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcffb6610 a2=0 a3=0 items=0 ppid=2837 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:08:52.685778 kubelet[2837]: E1216 02:08:52.685716 2837 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:08:52.693000 audit[2862]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.693000 audit[2862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff33a3aa0 a2=0 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:08:52.705993 kubelet[2837]: I1216 02:08:52.705947 2837 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:08:52.709921 kubelet[2837]: I1216 02:08:52.709507 2837 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:08:52.709921 kubelet[2837]: I1216 02:08:52.709566 2837 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:08:52.710000 audit[2865]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.710000 audit[2865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc0bdbb00 a2=0 a3=0 items=0 ppid=2837 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 02:08:52.712286 kubelet[2837]: I1216 02:08:52.712095 2837 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 02:08:52.716676 kubelet[2837]: I1216 02:08:52.716599 2837 policy_none.go:49] "None policy: Start" Dec 16 02:08:52.716676 kubelet[2837]: I1216 02:08:52.716659 2837 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 02:08:52.716883 kubelet[2837]: I1216 02:08:52.716692 2837 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 02:08:52.718000 audit[2866]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2866 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:52.718000 audit[2866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffddf79f40 a2=0 a3=0 items=0 ppid=2837 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:08:52.721716 kubelet[2837]: I1216 02:08:52.720532 2837 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 02:08:52.721716 kubelet[2837]: I1216 02:08:52.720590 2837 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 02:08:52.721716 kubelet[2837]: I1216 02:08:52.720626 2837 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 02:08:52.721716 kubelet[2837]: E1216 02:08:52.720712 2837 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:08:52.722215 kubelet[2837]: E1216 02:08:52.722031 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 02:08:52.721000 audit[2867]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.722583 kubelet[2837]: I1216 02:08:52.722521 2837 policy_none.go:47] "Start" Dec 16 02:08:52.721000 audit[2867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfcda3a0 a2=0 a3=0 items=0 ppid=2837 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:08:52.726000 audit[2869]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:52.726000 audit[2869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff33d3b30 a2=0 a3=0 items=0 ppid=2837 pid=2869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:08:52.731000 audit[2870]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.731000 audit[2870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcacf0130 a2=0 a3=0 items=0 ppid=2837 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:08:52.734000 audit[2871]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:52.734000 audit[2871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf586a90 a2=0 a3=0 items=0 ppid=2837 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.734000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:08:52.738462 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 02:08:52.740000 audit[2872]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:08:52.740000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe514b040 a2=0 a3=0 items=0 ppid=2837 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:08:52.743000 audit[2873]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2873 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:08:52.743000 audit[2873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd12cbb0 a2=0 a3=0 items=0 ppid=2837 pid=2873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:52.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:08:52.752799 kubelet[2837]: E1216 02:08:52.752729 2837 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-92\" not found" Dec 16 02:08:52.760901 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 02:08:52.770376 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 02:08:52.786610 kubelet[2837]: E1216 02:08:52.786552 2837 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:08:52.786946 kubelet[2837]: I1216 02:08:52.786908 2837 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:08:52.787077 kubelet[2837]: I1216 02:08:52.786946 2837 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:08:52.789473 kubelet[2837]: I1216 02:08:52.788170 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:08:52.793291 kubelet[2837]: E1216 02:08:52.793227 2837 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 02:08:52.793485 kubelet[2837]: E1216 02:08:52.793325 2837 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-92\" not found" Dec 16 02:08:52.849580 systemd[1]: Created slice kubepods-burstable-pod4bbfd078539980b148d896aac39e51bb.slice - libcontainer container kubepods-burstable-pod4bbfd078539980b148d896aac39e51bb.slice. 
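The slice names above follow the kubelet's systemd cgroup driver layout: kubepods.slice holds the per-QoS parents (kubepods-burstable.slice, kubepods-besteffort.slice), and each pod gets its own pod slice beneath the matching parent. A small sketch of that naming convention as generalized from these "Created slice" lines (an assumption; dashes in an API pod UID would be escaped to underscores):

def pod_slice(qos_class: str, pod_uid: str) -> str:
    # systemd slice names use "-" to separate hierarchy levels, so dashes
    # inside the pod UID are replaced with "_" by the kubelet.
    uid = pod_uid.replace("-", "_")
    parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{parent}-pod{uid}.slice"

# The kube-apiserver static pod created above (UID taken from the log):
print(pod_slice("burstable", "4bbfd078539980b148d896aac39e51bb"))
# -> kubepods-burstable-pod4bbfd078539980b148d896aac39e51bb.slice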
Dec 16 02:08:52.856850 kubelet[2837]: I1216 02:08:52.855565 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:52.856850 kubelet[2837]: I1216 02:08:52.855643 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:52.856850 kubelet[2837]: I1216 02:08:52.855684 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:52.856850 kubelet[2837]: I1216 02:08:52.855718 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:52.856850 kubelet[2837]: I1216 02:08:52.855767 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7966cd8b2a8d78a0e71153decf26fa5-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-92\" (UID: \"d7966cd8b2a8d78a0e71153decf26fa5\") " pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:08:52.857187 kubelet[2837]: I1216 02:08:52.855802 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:52.857187 kubelet[2837]: I1216 02:08:52.855835 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bbfd078539980b148d896aac39e51bb-ca-certs\") pod \"kube-apiserver-ip-172-31-24-92\" (UID: \"4bbfd078539980b148d896aac39e51bb\") " pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:08:52.857187 kubelet[2837]: I1216 02:08:52.855868 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bbfd078539980b148d896aac39e51bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-92\" (UID: \"4bbfd078539980b148d896aac39e51bb\") " pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:08:52.857187 kubelet[2837]: I1216 02:08:52.855907 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4bbfd078539980b148d896aac39e51bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-92\" (UID: \"4bbfd078539980b148d896aac39e51bb\") " pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:08:52.859579 kubelet[2837]: E1216 02:08:52.859507 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-92?timeout=10s\": dial tcp 172.31.24.92:6443: connect: connection refused" interval="400ms" Dec 16 02:08:52.865399 kubelet[2837]: E1216 02:08:52.864305 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:52.871726 systemd[1]: Created slice kubepods-burstable-podd2f60d5add98bae365ed6d7cf333e74e.slice - libcontainer container kubepods-burstable-podd2f60d5add98bae365ed6d7cf333e74e.slice. Dec 16 02:08:52.877736 kubelet[2837]: E1216 02:08:52.877675 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:52.883447 systemd[1]: Created slice kubepods-burstable-podd7966cd8b2a8d78a0e71153decf26fa5.slice - libcontainer container kubepods-burstable-podd7966cd8b2a8d78a0e71153decf26fa5.slice. Dec 16 02:08:52.888991 kubelet[2837]: E1216 02:08:52.888644 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:52.893312 kubelet[2837]: I1216 02:08:52.893271 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-92" Dec 16 02:08:52.894214 kubelet[2837]: E1216 02:08:52.894146 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.92:6443/api/v1/nodes\": dial tcp 172.31.24.92:6443: connect: connection refused" node="ip-172-31-24-92" Dec 16 02:08:53.097210 kubelet[2837]: I1216 02:08:53.097159 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-92" Dec 16 02:08:53.097901 kubelet[2837]: E1216 02:08:53.097730 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.92:6443/api/v1/nodes\": dial tcp 172.31.24.92:6443: connect: connection refused" node="ip-172-31-24-92" Dec 16 02:08:53.169826 containerd[1908]: time="2025-12-16T02:08:53.169365444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-92,Uid:4bbfd078539980b148d896aac39e51bb,Namespace:kube-system,Attempt:0,}" Dec 16 02:08:53.182205 containerd[1908]: time="2025-12-16T02:08:53.182108916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-92,Uid:d2f60d5add98bae365ed6d7cf333e74e,Namespace:kube-system,Attempt:0,}" Dec 16 02:08:53.192459 containerd[1908]: time="2025-12-16T02:08:53.192306636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-92,Uid:d7966cd8b2a8d78a0e71153decf26fa5,Namespace:kube-system,Attempt:0,}" Dec 16 02:08:53.260464 kubelet[2837]: E1216 02:08:53.260348 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-92?timeout=10s\": dial tcp 172.31.24.92:6443: connect: connection refused" interval="800ms" Dec 16 02:08:53.501356 
kubelet[2837]: I1216 02:08:53.501303 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-92" Dec 16 02:08:53.502817 kubelet[2837]: E1216 02:08:53.502740 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.92:6443/api/v1/nodes\": dial tcp 172.31.24.92:6443: connect: connection refused" node="ip-172-31-24-92" Dec 16 02:08:53.654115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053403998.mount: Deactivated successfully. Dec 16 02:08:53.663368 kubelet[2837]: E1216 02:08:53.663293 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 02:08:53.664139 containerd[1908]: time="2025-12-16T02:08:53.664081143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:53.666129 containerd[1908]: time="2025-12-16T02:08:53.666070935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:53.668102 containerd[1908]: time="2025-12-16T02:08:53.668030895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:08:53.668963 containerd[1908]: time="2025-12-16T02:08:53.668906319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:08:53.671634 containerd[1908]: time="2025-12-16T02:08:53.671453799Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:53.673032 containerd[1908]: time="2025-12-16T02:08:53.672821283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:08:53.673157 containerd[1908]: time="2025-12-16T02:08:53.673111227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:53.677711 containerd[1908]: time="2025-12-16T02:08:53.677657367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:08:53.680639 containerd[1908]: time="2025-12-16T02:08:53.680572539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.314579ms" Dec 16 02:08:53.684064 containerd[1908]: time="2025-12-16T02:08:53.684006663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 510.603723ms" Dec 16 02:08:53.688821 containerd[1908]: time="2025-12-16T02:08:53.688693443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 503.588667ms" Dec 16 02:08:53.750954 containerd[1908]: time="2025-12-16T02:08:53.750877911Z" level=info msg="connecting to shim d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde" address="unix:///run/containerd/s/8d622b48c57065ad352c4bb8ffdb88482360dbae0367b499342c2496e5d5ef83" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:08:53.752801 containerd[1908]: time="2025-12-16T02:08:53.752606667Z" level=info msg="connecting to shim 803ba0f43e3a0f43b1feba3effd76f8cef0d662bd553fc99baf910a138a1a935" address="unix:///run/containerd/s/cf86a26390c6d2fe99b70a26d21d0fedac350450bceec14aefbcca017f9fe132" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:08:53.760514 containerd[1908]: time="2025-12-16T02:08:53.759497943Z" level=info msg="connecting to shim 7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec" address="unix:///run/containerd/s/27055cffc1e913dd5a2eced25dbef41d7b248f358ae25fd53466422364128939" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:08:53.813811 systemd[1]: Started cri-containerd-803ba0f43e3a0f43b1feba3effd76f8cef0d662bd553fc99baf910a138a1a935.scope - libcontainer container 803ba0f43e3a0f43b1feba3effd76f8cef0d662bd553fc99baf910a138a1a935. Dec 16 02:08:53.842440 systemd[1]: Started cri-containerd-d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde.scope - libcontainer container d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde. Dec 16 02:08:53.867825 systemd[1]: Started cri-containerd-7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec.scope - libcontainer container 7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec. 
Dec 16 02:08:53.882000 audit: BPF prog-id=90 op=LOAD Dec 16 02:08:53.884000 audit: BPF prog-id=91 op=LOAD Dec 16 02:08:53.884000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.884000 audit: BPF prog-id=91 op=UNLOAD Dec 16 02:08:53.884000 audit[2931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.885000 audit: BPF prog-id=92 op=LOAD Dec 16 02:08:53.885000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.886000 audit: BPF prog-id=93 op=LOAD Dec 16 02:08:53.886000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.887000 audit: BPF prog-id=93 op=UNLOAD Dec 16 02:08:53.887000 audit[2931]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.887000 audit: BPF prog-id=92 op=UNLOAD Dec 16 02:08:53.887000 audit[2931]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.887000 audit: BPF prog-id=94 op=LOAD Dec 16 02:08:53.887000 audit[2931]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2900 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830336261306634336533613066343362316665626133656666643736 Dec 16 02:08:53.917000 audit: BPF prog-id=95 op=LOAD Dec 16 02:08:53.919000 audit: BPF prog-id=96 op=LOAD Dec 16 02:08:53.920000 audit: BPF prog-id=97 op=LOAD Dec 16 02:08:53.920000 audit[2933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.920000 audit: BPF prog-id=97 op=UNLOAD Dec 16 02:08:53.920000 audit[2933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.920000 audit: BPF prog-id=98 op=LOAD Dec 16 02:08:53.920000 audit[2933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.921000 audit: BPF prog-id=99 op=LOAD Dec 16 02:08:53.921000 audit[2933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 
a1=4000186168 a2=98 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.921000 audit: BPF prog-id=99 op=UNLOAD Dec 16 02:08:53.921000 audit[2933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.921000 audit: BPF prog-id=98 op=UNLOAD Dec 16 02:08:53.921000 audit[2933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.923000 audit: BPF prog-id=100 op=LOAD Dec 16 02:08:53.923000 audit[2947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.921000 audit: BPF prog-id=101 op=LOAD Dec 16 02:08:53.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.923000 audit: BPF prog-id=100 op=UNLOAD Dec 16 02:08:53.923000 audit[2947]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.921000 audit[2933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2901 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431636166396265623439393332346231623033366435663536386335 Dec 16 02:08:53.924000 audit: BPF prog-id=102 op=LOAD Dec 16 02:08:53.924000 audit[2947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.925000 audit: BPF prog-id=103 op=LOAD Dec 16 02:08:53.925000 audit[2947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.925000 audit: BPF prog-id=103 op=UNLOAD Dec 16 02:08:53.925000 audit[2947]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.925000 audit: BPF prog-id=102 op=UNLOAD Dec 16 02:08:53.925000 audit[2947]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.925000 audit: BPF prog-id=104 op=LOAD Dec 16 02:08:53.925000 audit[2947]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2911 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:53.925000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763356134366166316565373066363430656132626563333161323637 Dec 16 02:08:53.979238 kubelet[2837]: E1216 02:08:53.978754 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 02:08:54.013027 containerd[1908]: time="2025-12-16T02:08:54.012857172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-92,Uid:4bbfd078539980b148d896aac39e51bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"803ba0f43e3a0f43b1feba3effd76f8cef0d662bd553fc99baf910a138a1a935\"" Dec 16 02:08:54.031926 containerd[1908]: time="2025-12-16T02:08:54.031869708Z" level=info msg="CreateContainer within sandbox \"803ba0f43e3a0f43b1feba3effd76f8cef0d662bd553fc99baf910a138a1a935\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 02:08:54.035603 containerd[1908]: time="2025-12-16T02:08:54.035131032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-92,Uid:d7966cd8b2a8d78a0e71153decf26fa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde\"" Dec 16 02:08:54.037611 containerd[1908]: time="2025-12-16T02:08:54.037033500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-92,Uid:d2f60d5add98bae365ed6d7cf333e74e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec\"" Dec 16 02:08:54.044139 containerd[1908]: time="2025-12-16T02:08:54.044053020Z" level=info msg="CreateContainer within sandbox \"d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 02:08:54.049004 containerd[1908]: time="2025-12-16T02:08:54.048951649Z" level=info msg="CreateContainer within sandbox \"7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 02:08:54.052992 containerd[1908]: time="2025-12-16T02:08:54.052643077Z" level=info msg="Container e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:08:54.060018 containerd[1908]: time="2025-12-16T02:08:54.059954161Z" level=info msg="Container 2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:08:54.062473 kubelet[2837]: E1216 02:08:54.062320 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-92?timeout=10s\": dial tcp 172.31.24.92:6443: connect: connection refused" interval="1.6s" Dec 16 02:08:54.067490 containerd[1908]: time="2025-12-16T02:08:54.067401253Z" level=info msg="CreateContainer within sandbox \"803ba0f43e3a0f43b1feba3effd76f8cef0d662bd553fc99baf910a138a1a935\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b\"" 
Dec 16 02:08:54.069280 containerd[1908]: time="2025-12-16T02:08:54.069213889Z" level=info msg="StartContainer for \"e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b\"" Dec 16 02:08:54.074325 containerd[1908]: time="2025-12-16T02:08:54.074238421Z" level=info msg="CreateContainer within sandbox \"d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822\"" Dec 16 02:08:54.074696 containerd[1908]: time="2025-12-16T02:08:54.074256361Z" level=info msg="Container 55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:08:54.075001 containerd[1908]: time="2025-12-16T02:08:54.074442781Z" level=info msg="connecting to shim e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b" address="unix:///run/containerd/s/cf86a26390c6d2fe99b70a26d21d0fedac350450bceec14aefbcca017f9fe132" protocol=ttrpc version=3 Dec 16 02:08:54.077131 containerd[1908]: time="2025-12-16T02:08:54.077068969Z" level=info msg="StartContainer for \"2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822\"" Dec 16 02:08:54.077920 kubelet[2837]: E1216 02:08:54.077790 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 02:08:54.082434 containerd[1908]: time="2025-12-16T02:08:54.082274161Z" level=info msg="connecting to shim 2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822" address="unix:///run/containerd/s/8d622b48c57065ad352c4bb8ffdb88482360dbae0367b499342c2496e5d5ef83" protocol=ttrpc version=3 Dec 16 02:08:54.105338 containerd[1908]: time="2025-12-16T02:08:54.105264745Z" level=info msg="CreateContainer within sandbox \"7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a\"" Dec 16 02:08:54.109026 containerd[1908]: time="2025-12-16T02:08:54.108957361Z" level=info msg="StartContainer for \"55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a\"" Dec 16 02:08:54.118672 containerd[1908]: time="2025-12-16T02:08:54.118464769Z" level=info msg="connecting to shim 55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a" address="unix:///run/containerd/s/27055cffc1e913dd5a2eced25dbef41d7b248f358ae25fd53466422364128939" protocol=ttrpc version=3 Dec 16 02:08:54.129839 systemd[1]: Started cri-containerd-e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b.scope - libcontainer container e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b. 
Dec 16 02:08:54.162758 kubelet[2837]: E1216 02:08:54.162695 2837 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-92&limit=500&resourceVersion=0\": dial tcp 172.31.24.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 02:08:54.168711 systemd[1]: Started cri-containerd-2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822.scope - libcontainer container 2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822. Dec 16 02:08:54.185285 systemd[1]: Started cri-containerd-55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a.scope - libcontainer container 55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a. Dec 16 02:08:54.198000 audit: BPF prog-id=105 op=LOAD Dec 16 02:08:54.199000 audit: BPF prog-id=106 op=LOAD Dec 16 02:08:54.199000 audit[3019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.199000 audit: BPF prog-id=106 op=UNLOAD Dec 16 02:08:54.199000 audit[3019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.200000 audit: BPF prog-id=107 op=LOAD Dec 16 02:08:54.200000 audit[3019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.201000 audit: BPF prog-id=108 op=LOAD Dec 16 02:08:54.201000 audit[3019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.201000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.201000 audit: BPF prog-id=108 op=UNLOAD Dec 16 02:08:54.201000 audit[3019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.202000 audit: BPF prog-id=107 op=UNLOAD Dec 16 02:08:54.202000 audit[3019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.202000 audit: BPF prog-id=109 op=LOAD Dec 16 02:08:54.202000 audit[3019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2900 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538666338383162656535646163373232623438373237633562656530 Dec 16 02:08:54.248000 audit: BPF prog-id=110 op=LOAD Dec 16 02:08:54.253000 audit: BPF prog-id=111 op=LOAD Dec 16 02:08:54.253000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.254000 audit: BPF prog-id=111 op=UNLOAD Dec 16 02:08:54.254000 audit[3037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.254000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.256000 audit: BPF prog-id=112 op=LOAD Dec 16 02:08:54.256000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.256000 audit: BPF prog-id=113 op=LOAD Dec 16 02:08:54.256000 audit: BPF prog-id=114 op=LOAD Dec 16 02:08:54.256000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.256000 audit: BPF prog-id=114 op=UNLOAD Dec 16 02:08:54.256000 audit[3037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.256000 audit: BPF prog-id=112 op=UNLOAD Dec 16 02:08:54.256000 audit[3037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.256000 audit: BPF prog-id=115 op=LOAD Dec 16 02:08:54.256000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2911 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.256000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535646364313365303762373333653261643330616361383331646464 Dec 16 02:08:54.257000 audit: BPF prog-id=116 op=LOAD Dec 16 02:08:54.257000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.258000 audit: BPF prog-id=116 op=UNLOAD Dec 16 02:08:54.258000 audit[3025]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.258000 audit: BPF prog-id=117 op=LOAD Dec 16 02:08:54.258000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.258000 audit: BPF prog-id=118 op=LOAD Dec 16 02:08:54.258000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.258000 audit: BPF prog-id=118 op=UNLOAD Dec 16 02:08:54.258000 audit[3025]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.258000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.258000 audit: BPF prog-id=117 op=UNLOAD Dec 16 02:08:54.258000 audit[3025]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.259000 audit: BPF prog-id=119 op=LOAD Dec 16 02:08:54.259000 audit[3025]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2901 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:08:54.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237373066653330306161363230616432353863313430666363393864 Dec 16 02:08:54.296989 containerd[1908]: time="2025-12-16T02:08:54.296820146Z" level=info msg="StartContainer for \"e8fc881bee5dac722b48727c5bee0149093f6041f32c31ddced901838013306b\" returns successfully" Dec 16 02:08:54.307557 kubelet[2837]: I1216 02:08:54.306374 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-92" Dec 16 02:08:54.308465 kubelet[2837]: E1216 02:08:54.308359 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.92:6443/api/v1/nodes\": dial tcp 172.31.24.92:6443: connect: connection refused" node="ip-172-31-24-92" Dec 16 02:08:54.377796 containerd[1908]: time="2025-12-16T02:08:54.377651618Z" level=info msg="StartContainer for \"2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822\" returns successfully" Dec 16 02:08:54.392435 containerd[1908]: time="2025-12-16T02:08:54.392310830Z" level=info msg="StartContainer for \"55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a\" returns successfully" Dec 16 02:08:54.748555 kubelet[2837]: E1216 02:08:54.747774 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:54.759874 kubelet[2837]: E1216 02:08:54.759836 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:54.764305 kubelet[2837]: E1216 02:08:54.764258 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:55.765165 kubelet[2837]: E1216 02:08:55.764822 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" 
node="ip-172-31-24-92" Dec 16 02:08:55.767154 kubelet[2837]: E1216 02:08:55.767111 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:55.911486 kubelet[2837]: I1216 02:08:55.911352 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-92" Dec 16 02:08:56.770537 kubelet[2837]: E1216 02:08:56.770197 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:59.294945 kubelet[2837]: E1216 02:08:59.294895 2837 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-92\" not found" node="ip-172-31-24-92" Dec 16 02:08:59.424928 kubelet[2837]: I1216 02:08:59.424855 2837 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-92" Dec 16 02:08:59.424928 kubelet[2837]: E1216 02:08:59.424923 2837 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-92\": node \"ip-172-31-24-92\" not found" Dec 16 02:08:59.452738 kubelet[2837]: I1216 02:08:59.452665 2837 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:08:59.494438 kubelet[2837]: E1216 02:08:59.494047 2837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-92.188190113d010ff1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-92,UID:ip-172-31-24-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-92,},FirstTimestamp:2025-12-16 02:08:52.618268657 +0000 UTC m=+0.670571788,LastTimestamp:2025-12-16 02:08:52.618268657 +0000 UTC m=+0.670571788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-92,}" Dec 16 02:08:59.538888 kubelet[2837]: E1216 02:08:59.538825 2837 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:08:59.538888 kubelet[2837]: I1216 02:08:59.538881 2837 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:59.546473 kubelet[2837]: E1216 02:08:59.546286 2837 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:08:59.546473 kubelet[2837]: I1216 02:08:59.546343 2837 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:08:59.553759 kubelet[2837]: E1216 02:08:59.553689 2837 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:08:59.571214 kubelet[2837]: E1216 02:08:59.571157 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces 
\"kube-node-lease\" not found" interval="3.2s" Dec 16 02:08:59.612218 kubelet[2837]: I1216 02:08:59.612156 2837 apiserver.go:52] "Watching apiserver" Dec 16 02:08:59.653706 kubelet[2837]: I1216 02:08:59.653626 2837 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 02:09:02.194836 update_engine[1854]: I20251216 02:09:02.194743 1854 update_attempter.cc:509] Updating boot flags... Dec 16 02:09:02.708609 systemd[1]: Reload requested from client PID 3219 ('systemctl') (unit session-8.scope)... Dec 16 02:09:02.708645 systemd[1]: Reloading... Dec 16 02:09:03.158454 zram_generator::config[3280]: No configuration found. Dec 16 02:09:03.979851 systemd[1]: Reloading finished in 1268 ms. Dec 16 02:09:04.147182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:04.176967 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 02:09:04.178642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:04.178762 systemd[1]: kubelet.service: Consumed 1.646s CPU time, 123.4M memory peak. Dec 16 02:09:04.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:04.180658 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 02:09:04.180770 kernel: audit: type=1131 audit(1765850944.178:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:04.192493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:09:04.193000 audit: BPF prog-id=120 op=LOAD Dec 16 02:09:04.193000 audit: BPF prog-id=73 op=UNLOAD Dec 16 02:09:04.201054 kernel: audit: type=1334 audit(1765850944.193:407): prog-id=120 op=LOAD Dec 16 02:09:04.201156 kernel: audit: type=1334 audit(1765850944.193:408): prog-id=73 op=UNLOAD Dec 16 02:09:04.193000 audit: BPF prog-id=121 op=LOAD Dec 16 02:09:04.204226 kernel: audit: type=1334 audit(1765850944.193:409): prog-id=121 op=LOAD Dec 16 02:09:04.193000 audit: BPF prog-id=122 op=LOAD Dec 16 02:09:04.209015 kernel: audit: type=1334 audit(1765850944.193:410): prog-id=122 op=LOAD Dec 16 02:09:04.193000 audit: BPF prog-id=74 op=UNLOAD Dec 16 02:09:04.212477 kernel: audit: type=1334 audit(1765850944.193:411): prog-id=74 op=UNLOAD Dec 16 02:09:04.218291 kernel: audit: type=1334 audit(1765850944.193:412): prog-id=75 op=UNLOAD Dec 16 02:09:04.193000 audit: BPF prog-id=75 op=UNLOAD Dec 16 02:09:04.198000 audit: BPF prog-id=123 op=LOAD Dec 16 02:09:04.220030 kernel: audit: type=1334 audit(1765850944.198:413): prog-id=123 op=LOAD Dec 16 02:09:04.221030 kernel: audit: type=1334 audit(1765850944.198:414): prog-id=81 op=UNLOAD Dec 16 02:09:04.198000 audit: BPF prog-id=81 op=UNLOAD Dec 16 02:09:04.225930 kernel: audit: type=1334 audit(1765850944.198:415): prog-id=124 op=LOAD Dec 16 02:09:04.198000 audit: BPF prog-id=124 op=LOAD Dec 16 02:09:04.198000 audit: BPF prog-id=125 op=LOAD Dec 16 02:09:04.198000 audit: BPF prog-id=82 op=UNLOAD Dec 16 02:09:04.198000 audit: BPF prog-id=83 op=UNLOAD Dec 16 02:09:04.206000 audit: BPF prog-id=126 op=LOAD Dec 16 02:09:04.206000 audit: BPF prog-id=87 op=UNLOAD Dec 16 02:09:04.211000 audit: BPF prog-id=127 op=LOAD Dec 16 02:09:04.212000 audit: BPF prog-id=128 op=LOAD Dec 16 02:09:04.212000 audit: BPF prog-id=88 
op=UNLOAD Dec 16 02:09:04.212000 audit: BPF prog-id=89 op=UNLOAD Dec 16 02:09:04.226000 audit: BPF prog-id=129 op=LOAD Dec 16 02:09:04.226000 audit: BPF prog-id=70 op=UNLOAD Dec 16 02:09:04.226000 audit: BPF prog-id=130 op=LOAD Dec 16 02:09:04.226000 audit: BPF prog-id=131 op=LOAD Dec 16 02:09:04.227000 audit: BPF prog-id=71 op=UNLOAD Dec 16 02:09:04.227000 audit: BPF prog-id=72 op=UNLOAD Dec 16 02:09:04.228000 audit: BPF prog-id=132 op=LOAD Dec 16 02:09:04.228000 audit: BPF prog-id=76 op=UNLOAD Dec 16 02:09:04.230000 audit: BPF prog-id=133 op=LOAD Dec 16 02:09:04.230000 audit: BPF prog-id=78 op=UNLOAD Dec 16 02:09:04.231000 audit: BPF prog-id=134 op=LOAD Dec 16 02:09:04.231000 audit: BPF prog-id=135 op=LOAD Dec 16 02:09:04.231000 audit: BPF prog-id=79 op=UNLOAD Dec 16 02:09:04.231000 audit: BPF prog-id=80 op=UNLOAD Dec 16 02:09:04.233000 audit: BPF prog-id=136 op=LOAD Dec 16 02:09:04.233000 audit: BPF prog-id=137 op=LOAD Dec 16 02:09:04.233000 audit: BPF prog-id=84 op=UNLOAD Dec 16 02:09:04.233000 audit: BPF prog-id=85 op=UNLOAD Dec 16 02:09:04.235000 audit: BPF prog-id=138 op=LOAD Dec 16 02:09:04.235000 audit: BPF prog-id=77 op=UNLOAD Dec 16 02:09:04.236000 audit: BPF prog-id=139 op=LOAD Dec 16 02:09:04.236000 audit: BPF prog-id=86 op=UNLOAD Dec 16 02:09:04.620723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:09:04.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:04.639968 (kubelet)[3500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:09:04.740354 kubelet[3500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:09:04.740354 kubelet[3500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:09:04.742449 kubelet[3500]: I1216 02:09:04.741130 3500 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:09:04.756613 kubelet[3500]: I1216 02:09:04.756542 3500 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 02:09:04.756844 kubelet[3500]: I1216 02:09:04.756820 3500 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:09:04.757083 kubelet[3500]: I1216 02:09:04.757034 3500 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 02:09:04.757238 kubelet[3500]: I1216 02:09:04.757213 3500 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 02:09:04.757991 kubelet[3500]: I1216 02:09:04.757930 3500 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 02:09:04.761739 kubelet[3500]: I1216 02:09:04.761697 3500 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 02:09:04.778090 kubelet[3500]: I1216 02:09:04.778022 3500 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:09:04.791709 kubelet[3500]: I1216 02:09:04.791150 3500 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:09:04.797468 kubelet[3500]: I1216 02:09:04.797394 3500 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 02:09:04.798221 kubelet[3500]: I1216 02:09:04.798160 3500 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:09:04.798687 kubelet[3500]: I1216 02:09:04.798362 3500 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:09:04.799008 kubelet[3500]: I1216 02:09:04.798978 3500 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:09:04.799163 kubelet[3500]: I1216 02:09:04.799142 3500 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 02:09:04.799340 kubelet[3500]: I1216 02:09:04.799318 3500 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 02:09:04.802152 kubelet[3500]: I1216 02:09:04.802075 3500 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:04.802978 kubelet[3500]: I1216 02:09:04.802767 3500 kubelet.go:475] "Attempting to sync node with API server" Dec 16 02:09:04.802978 kubelet[3500]: I1216 02:09:04.802807 3500 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:09:04.802978 kubelet[3500]: I1216 02:09:04.802855 3500 kubelet.go:387] "Adding apiserver pod source" 
Dec 16 02:09:04.802978 kubelet[3500]: I1216 02:09:04.802884 3500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:09:04.810462 kubelet[3500]: I1216 02:09:04.809993 3500 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:09:04.815451 kubelet[3500]: I1216 02:09:04.815356 3500 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 02:09:04.815588 kubelet[3500]: I1216 02:09:04.815465 3500 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 02:09:04.829808 kubelet[3500]: I1216 02:09:04.829734 3500 server.go:1262] "Started kubelet" Dec 16 02:09:04.833455 kubelet[3500]: I1216 02:09:04.833292 3500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:09:04.851488 kubelet[3500]: I1216 02:09:04.850026 3500 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:09:04.873393 kubelet[3500]: I1216 02:09:04.873216 3500 server.go:310] "Adding debug handlers to kubelet server" Dec 16 02:09:04.883514 kubelet[3500]: I1216 02:09:04.851944 3500 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:09:04.884107 kubelet[3500]: I1216 02:09:04.884043 3500 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 02:09:04.886021 kubelet[3500]: I1216 02:09:04.884779 3500 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:09:04.886021 kubelet[3500]: E1216 02:09:04.868576 3500 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-92\" not found" Dec 16 02:09:04.886021 kubelet[3500]: I1216 02:09:04.860374 3500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:09:04.888847 kubelet[3500]: I1216 02:09:04.867882 3500 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 02:09:04.889358 kubelet[3500]: I1216 02:09:04.867900 3500 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 02:09:04.892192 kubelet[3500]: I1216 02:09:04.892112 3500 reconciler.go:29] "Reconciler: start to sync state" Dec 16 02:09:04.912613 kubelet[3500]: I1216 02:09:04.912556 3500 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:09:04.923460 kubelet[3500]: I1216 02:09:04.921697 3500 factory.go:223] Registration of the containerd container factory successfully Dec 16 02:09:04.923460 kubelet[3500]: I1216 02:09:04.921744 3500 factory.go:223] Registration of the systemd container factory successfully Dec 16 02:09:04.947562 kubelet[3500]: E1216 02:09:04.947513 3500 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:09:04.980713 kubelet[3500]: I1216 02:09:04.980014 3500 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 02:09:04.986558 kubelet[3500]: I1216 02:09:04.986517 3500 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 02:09:04.987399 kubelet[3500]: I1216 02:09:04.987366 3500 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 02:09:04.988786 kubelet[3500]: I1216 02:09:04.988750 3500 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 02:09:04.989083 kubelet[3500]: E1216 02:09:04.989036 3500 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:09:05.090920 kubelet[3500]: E1216 02:09:05.090589 3500 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153330 3500 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153368 3500 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153404 3500 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153691 3500 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153711 3500 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153745 3500 policy_none.go:49] "None policy: Start" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153764 3500 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.153785 3500 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.154030 3500 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 02:09:05.154529 kubelet[3500]: I1216 02:09:05.154047 3500 policy_none.go:47] "Start" Dec 16 02:09:05.183655 kubelet[3500]: E1216 02:09:05.183619 3500 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 02:09:05.184714 kubelet[3500]: I1216 02:09:05.184686 3500 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:09:05.184924 kubelet[3500]: I1216 02:09:05.184875 3500 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:09:05.186483 kubelet[3500]: I1216 02:09:05.186103 3500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:09:05.199743 kubelet[3500]: E1216 02:09:05.198894 3500 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 02:09:05.292602 kubelet[3500]: I1216 02:09:05.292551 3500 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:09:05.292869 kubelet[3500]: I1216 02:09:05.292820 3500 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:09:05.294244 kubelet[3500]: I1216 02:09:05.292602 3500 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:09:05.312874 kubelet[3500]: I1216 02:09:05.312816 3500 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-92" Dec 16 02:09:05.335363 kubelet[3500]: I1216 02:09:05.334977 3500 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-92" Dec 16 02:09:05.335363 kubelet[3500]: I1216 02:09:05.335101 3500 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-92" Dec 16 02:09:05.401200 kubelet[3500]: I1216 02:09:05.400856 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:09:05.401200 kubelet[3500]: I1216 02:09:05.400922 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:09:05.401200 kubelet[3500]: I1216 02:09:05.400967 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:09:05.401200 kubelet[3500]: I1216 02:09:05.401002 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7966cd8b2a8d78a0e71153decf26fa5-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-92\" (UID: \"d7966cd8b2a8d78a0e71153decf26fa5\") " pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:09:05.401200 kubelet[3500]: I1216 02:09:05.401041 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bbfd078539980b148d896aac39e51bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-92\" (UID: \"4bbfd078539980b148d896aac39e51bb\") " pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:09:05.401574 kubelet[3500]: I1216 02:09:05.401084 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:09:05.401574 kubelet[3500]: I1216 02:09:05.401136 3500 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2f60d5add98bae365ed6d7cf333e74e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-92\" (UID: \"d2f60d5add98bae365ed6d7cf333e74e\") " pod="kube-system/kube-controller-manager-ip-172-31-24-92" Dec 16 02:09:05.402318 kubelet[3500]: I1216 02:09:05.402252 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bbfd078539980b148d896aac39e51bb-ca-certs\") pod \"kube-apiserver-ip-172-31-24-92\" (UID: \"4bbfd078539980b148d896aac39e51bb\") " pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:09:05.402558 kubelet[3500]: I1216 02:09:05.402515 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bbfd078539980b148d896aac39e51bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-92\" (UID: \"4bbfd078539980b148d896aac39e51bb\") " pod="kube-system/kube-apiserver-ip-172-31-24-92" Dec 16 02:09:05.816280 kubelet[3500]: I1216 02:09:05.816117 3500 apiserver.go:52] "Watching apiserver" Dec 16 02:09:05.890216 kubelet[3500]: I1216 02:09:05.890133 3500 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 02:09:06.018151 kubelet[3500]: I1216 02:09:06.017882 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-92" podStartSLOduration=1.017864388 podStartE2EDuration="1.017864388s" podCreationTimestamp="2025-12-16 02:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:06.017844432 +0000 UTC m=+1.365920360" watchObservedRunningTime="2025-12-16 02:09:06.017864388 +0000 UTC m=+1.365940304" Dec 16 02:09:06.051740 kubelet[3500]: I1216 02:09:06.051628 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-92" podStartSLOduration=1.051606828 podStartE2EDuration="1.051606828s" podCreationTimestamp="2025-12-16 02:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:06.044848788 +0000 UTC m=+1.392924716" watchObservedRunningTime="2025-12-16 02:09:06.051606828 +0000 UTC m=+1.399682780" Dec 16 02:09:06.079549 kubelet[3500]: I1216 02:09:06.077325 3500 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:09:06.086476 kubelet[3500]: E1216 02:09:06.086384 3500 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-92\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-92" Dec 16 02:09:06.126361 kubelet[3500]: I1216 02:09:06.126267 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-92" podStartSLOduration=1.126222853 podStartE2EDuration="1.126222853s" podCreationTimestamp="2025-12-16 02:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:09:06.093747708 +0000 UTC m=+1.441823648" watchObservedRunningTime="2025-12-16 02:09:06.126222853 +0000 UTC m=+1.474298757" Dec 16 02:09:07.907367 
kubelet[3500]: I1216 02:09:07.907246 3500 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 02:09:07.908946 containerd[1908]: time="2025-12-16T02:09:07.908886521Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 02:09:07.910633 kubelet[3500]: I1216 02:09:07.910192 3500 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 02:09:08.534591 systemd[1]: Created slice kubepods-besteffort-pod6e0842a3_21a1_4213_b4ae_4941dd08ee35.slice - libcontainer container kubepods-besteffort-pod6e0842a3_21a1_4213_b4ae_4941dd08ee35.slice. Dec 16 02:09:08.622714 kubelet[3500]: I1216 02:09:08.622645 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e0842a3-21a1-4213-b4ae-4941dd08ee35-lib-modules\") pod \"kube-proxy-8j9wl\" (UID: \"6e0842a3-21a1-4213-b4ae-4941dd08ee35\") " pod="kube-system/kube-proxy-8j9wl" Dec 16 02:09:08.622882 kubelet[3500]: I1216 02:09:08.622714 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjtn\" (UniqueName: \"kubernetes.io/projected/6e0842a3-21a1-4213-b4ae-4941dd08ee35-kube-api-access-8mjtn\") pod \"kube-proxy-8j9wl\" (UID: \"6e0842a3-21a1-4213-b4ae-4941dd08ee35\") " pod="kube-system/kube-proxy-8j9wl" Dec 16 02:09:08.622882 kubelet[3500]: I1216 02:09:08.622769 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e0842a3-21a1-4213-b4ae-4941dd08ee35-kube-proxy\") pod \"kube-proxy-8j9wl\" (UID: \"6e0842a3-21a1-4213-b4ae-4941dd08ee35\") " pod="kube-system/kube-proxy-8j9wl" Dec 16 02:09:08.622882 kubelet[3500]: I1216 02:09:08.622807 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e0842a3-21a1-4213-b4ae-4941dd08ee35-xtables-lock\") pod \"kube-proxy-8j9wl\" (UID: \"6e0842a3-21a1-4213-b4ae-4941dd08ee35\") " pod="kube-system/kube-proxy-8j9wl" Dec 16 02:09:08.858578 containerd[1908]: time="2025-12-16T02:09:08.858193758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8j9wl,Uid:6e0842a3-21a1-4213-b4ae-4941dd08ee35,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:08.926482 containerd[1908]: time="2025-12-16T02:09:08.925609950Z" level=info msg="connecting to shim 32a81ca13e27126e041cc18551f20ceb4c150221ce26fe3859075f5c43897e9f" address="unix:///run/containerd/s/697d3705461acd33e22f9a5892141d1cb38d38f5943704aeae2b6aeb4b0dcc45" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:09.034775 systemd[1]: Started cri-containerd-32a81ca13e27126e041cc18551f20ceb4c150221ce26fe3859075f5c43897e9f.scope - libcontainer container 32a81ca13e27126e041cc18551f20ceb4c150221ce26fe3859075f5c43897e9f. 
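[Editor's note, not part of the log] The slice systemd created above, kubepods-besteffort-pod6e0842a3_21a1_4213_b4ae_4941dd08ee35.slice, encodes the kube-proxy pod's QoS class and UID: with the systemd cgroup driver the kubelet names pod slices kubepods-<qos>-pod<uid>.slice and rewrites the '-' characters of the UID to '_', since '-' is the slice hierarchy separator. A small sketch (illustrative helper, not a kubelet API) that recovers the UID shown in the VerifyControllerAttachedVolume entries above:

    def pod_uid_from_slice(slice_name: str) -> str:
        # kubepods-<qos>-pod<uid>.slice, with '-' in the UID rewritten to '_'
        stem = slice_name.removesuffix(".slice")
        uid_part = stem.split("-pod", 1)[1]
        return uid_part.replace("_", "-")

    print(pod_uid_from_slice(
        "kubepods-besteffort-pod6e0842a3_21a1_4213_b4ae_4941dd08ee35.slice"))
    # -> 6e0842a3-21a1-4213-b4ae-4941dd08ee35, the UID of pod kube-system/kube-proxy-8j9wl above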
Dec 16 02:09:09.227077 kubelet[3500]: I1216 02:09:09.226751 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5312aea8-e4ec-4538-892c-09070271c0cd-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-ghhwl\" (UID: \"5312aea8-e4ec-4538-892c-09070271c0cd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-ghhwl" Dec 16 02:09:09.227077 kubelet[3500]: I1216 02:09:09.226829 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4s8\" (UniqueName: \"kubernetes.io/projected/5312aea8-e4ec-4538-892c-09070271c0cd-kube-api-access-4t4s8\") pod \"tigera-operator-65cdcdfd6d-ghhwl\" (UID: \"5312aea8-e4ec-4538-892c-09070271c0cd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-ghhwl" Dec 16 02:09:09.230655 systemd[1]: Created slice kubepods-besteffort-pod5312aea8_e4ec_4538_892c_09070271c0cd.slice - libcontainer container kubepods-besteffort-pod5312aea8_e4ec_4538_892c_09070271c0cd.slice. Dec 16 02:09:09.261244 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 02:09:09.261373 kernel: audit: type=1334 audit(1765850949.257:448): prog-id=140 op=LOAD Dec 16 02:09:09.257000 audit: BPF prog-id=140 op=LOAD Dec 16 02:09:09.262000 audit: BPF prog-id=141 op=LOAD Dec 16 02:09:09.264691 kernel: audit: type=1334 audit(1765850949.262:449): prog-id=141 op=LOAD Dec 16 02:09:09.262000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.271338 kernel: audit: type=1300 audit(1765850949.262:449): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.278381 kernel: audit: type=1327 audit(1765850949.262:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.278754 kernel: audit: type=1334 audit(1765850949.262:450): prog-id=141 op=UNLOAD Dec 16 02:09:09.262000 audit: BPF prog-id=141 op=UNLOAD Dec 16 02:09:09.262000 audit[3566]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.286851 kernel: audit: type=1300 audit(1765850949.262:450): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 02:09:09.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.292955 kernel: audit: type=1327 audit(1765850949.262:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.262000 audit: BPF prog-id=142 op=LOAD Dec 16 02:09:09.294919 kernel: audit: type=1334 audit(1765850949.262:451): prog-id=142 op=LOAD Dec 16 02:09:09.295188 kernel: audit: type=1300 audit(1765850949.262:451): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.262000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.308873 kernel: audit: type=1327 audit(1765850949.262:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.264000 audit: BPF prog-id=143 op=LOAD Dec 16 02:09:09.264000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.264000 audit: BPF prog-id=143 op=UNLOAD Dec 16 02:09:09.264000 audit[3566]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.264000 audit: BPF prog-id=142 op=UNLOAD Dec 
16 02:09:09.264000 audit[3566]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.264000 audit: BPF prog-id=144 op=LOAD Dec 16 02:09:09.264000 audit[3566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3556 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332613831636131336532373132366530343163633138353531663230 Dec 16 02:09:09.354565 containerd[1908]: time="2025-12-16T02:09:09.353066621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8j9wl,Uid:6e0842a3-21a1-4213-b4ae-4941dd08ee35,Namespace:kube-system,Attempt:0,} returns sandbox id \"32a81ca13e27126e041cc18551f20ceb4c150221ce26fe3859075f5c43897e9f\"" Dec 16 02:09:09.381464 containerd[1908]: time="2025-12-16T02:09:09.381312221Z" level=info msg="CreateContainer within sandbox \"32a81ca13e27126e041cc18551f20ceb4c150221ce26fe3859075f5c43897e9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 02:09:09.405630 containerd[1908]: time="2025-12-16T02:09:09.405568517Z" level=info msg="Container da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:09.426491 containerd[1908]: time="2025-12-16T02:09:09.426379313Z" level=info msg="CreateContainer within sandbox \"32a81ca13e27126e041cc18551f20ceb4c150221ce26fe3859075f5c43897e9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1\"" Dec 16 02:09:09.429513 containerd[1908]: time="2025-12-16T02:09:09.429368585Z" level=info msg="StartContainer for \"da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1\"" Dec 16 02:09:09.434226 containerd[1908]: time="2025-12-16T02:09:09.434150969Z" level=info msg="connecting to shim da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1" address="unix:///run/containerd/s/697d3705461acd33e22f9a5892141d1cb38d38f5943704aeae2b6aeb4b0dcc45" protocol=ttrpc version=3 Dec 16 02:09:09.469778 systemd[1]: Started cri-containerd-da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1.scope - libcontainer container da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1. 
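[Editor's note, not part of the log] The audit PROCTITLE records in this section carry the audited process's command line as hex-encoded, NUL-separated argv; the kernel truncates long command lines, which is why the runc records around this point stop partway through the /run/containerd/io.containerd.runtime.v2.task/k8s.io/... path. A sketch of how to read them (the sample string is one of the shorter iptables records appearing further down in this log):

    def decode_proctitle(hex_str: str) -> str:
        # PROCTITLE payload: argv joined by NUL bytes, hex-encoded by auditd
        return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode(errors="replace")

    print(decode_proctitle(
        "69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
    # -> iptables -w 5 -N KUBE-PROXY-CANARY -t mangle  (kube-proxy creating its canary chain)

The runc records above decode the same way, to runc --root /run/containerd/runc/k8s.io --log /run/containerd/... invocations for the sandbox and kube-proxy containers.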
Dec 16 02:09:09.548104 containerd[1908]: time="2025-12-16T02:09:09.547880478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-ghhwl,Uid:5312aea8-e4ec-4538-892c-09070271c0cd,Namespace:tigera-operator,Attempt:0,}" Dec 16 02:09:09.554000 audit: BPF prog-id=145 op=LOAD Dec 16 02:09:09.554000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3556 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461363365323363646136613433613466613135306337316433346435 Dec 16 02:09:09.555000 audit: BPF prog-id=146 op=LOAD Dec 16 02:09:09.555000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3556 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461363365323363646136613433613466613135306337316433346435 Dec 16 02:09:09.555000 audit: BPF prog-id=146 op=UNLOAD Dec 16 02:09:09.555000 audit[3595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3556 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461363365323363646136613433613466613135306337316433346435 Dec 16 02:09:09.556000 audit: BPF prog-id=145 op=UNLOAD Dec 16 02:09:09.556000 audit[3595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3556 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461363365323363646136613433613466613135306337316433346435 Dec 16 02:09:09.556000 audit: BPF prog-id=147 op=LOAD Dec 16 02:09:09.556000 audit[3595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3556 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.556000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461363365323363646136613433613466613135306337316433346435 Dec 16 02:09:09.606868 containerd[1908]: time="2025-12-16T02:09:09.606663066Z" level=info msg="connecting to shim 279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4" address="unix:///run/containerd/s/5a72882a5edc2d3952d612fdcb706c3dc91188d6333ab1c3f1b8dc912f658d5a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:09.609520 containerd[1908]: time="2025-12-16T02:09:09.609290154Z" level=info msg="StartContainer for \"da63e23cda6a43a4fa150c71d34d5b19f971711a896fbe9244256b69ead85fb1\" returns successfully" Dec 16 02:09:09.668017 systemd[1]: Started cri-containerd-279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4.scope - libcontainer container 279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4. Dec 16 02:09:09.705000 audit: BPF prog-id=148 op=LOAD Dec 16 02:09:09.707000 audit: BPF prog-id=149 op=LOAD Dec 16 02:09:09.707000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.707000 audit: BPF prog-id=149 op=UNLOAD Dec 16 02:09:09.707000 audit[3646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.707000 audit: BPF prog-id=150 op=LOAD Dec 16 02:09:09.707000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.707000 audit: BPF prog-id=151 op=LOAD Dec 16 02:09:09.707000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.707000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.708000 audit: BPF prog-id=151 op=UNLOAD Dec 16 02:09:09.708000 audit[3646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.708000 audit: BPF prog-id=150 op=UNLOAD Dec 16 02:09:09.708000 audit[3646]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.708000 audit: BPF prog-id=152 op=LOAD Dec 16 02:09:09.708000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3634 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:09.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237396533376339626136663834646234643533306561386266646138 Dec 16 02:09:09.809596 containerd[1908]: time="2025-12-16T02:09:09.809195659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-ghhwl,Uid:5312aea8-e4ec-4538-892c-09070271c0cd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4\"" Dec 16 02:09:09.817106 containerd[1908]: time="2025-12-16T02:09:09.817033735Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 02:09:10.033000 audit[3705]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.033000 audit[3705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9b62920 a2=0 a3=1 items=0 ppid=3608 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.033000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:09:10.036000 audit[3706]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3706 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.036000 audit[3706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc169eb30 a2=0 a3=1 items=0 ppid=3608 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.036000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:09:10.039000 audit[3707]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.039000 audit[3707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff42f23e0 a2=0 a3=1 items=0 ppid=3608 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:09:10.056000 audit[3711]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.056000 audit[3711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6da66e0 a2=0 a3=1 items=0 ppid=3608 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:09:10.066000 audit[3712]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.066000 audit[3712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc87b1d0 a2=0 a3=1 items=0 ppid=3608 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:09:10.072000 audit[3714]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3714 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.072000 audit[3714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0349210 a2=0 a3=1 items=0 ppid=3608 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:09:10.117845 kubelet[3500]: I1216 02:09:10.117750 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8j9wl" podStartSLOduration=2.117727996 podStartE2EDuration="2.117727996s" podCreationTimestamp="2025-12-16 02:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 
02:09:10.117291676 +0000 UTC m=+5.465367652" watchObservedRunningTime="2025-12-16 02:09:10.117727996 +0000 UTC m=+5.465804056" Dec 16 02:09:10.147000 audit[3715]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3715 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.147000 audit[3715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd656c3e0 a2=0 a3=1 items=0 ppid=3608 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:09:10.153000 audit[3717]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.153000 audit[3717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd5342660 a2=0 a3=1 items=0 ppid=3608 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 02:09:10.163000 audit[3720]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.163000 audit[3720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff5a40c00 a2=0 a3=1 items=0 ppid=3608 pid=3720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 02:09:10.166000 audit[3721]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.166000 audit[3721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff663b620 a2=0 a3=1 items=0 ppid=3608 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:09:10.172000 audit[3723]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.172000 audit[3723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7f605b0 a2=0 a3=1 items=0 ppid=3608 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:09:10.174000 audit[3724]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.174000 audit[3724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd3735c0 a2=0 a3=1 items=0 ppid=3608 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:09:10.181000 audit[3726]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.181000 audit[3726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffda2a3980 a2=0 a3=1 items=0 ppid=3608 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.190000 audit[3729]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.190000 audit[3729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdac7d230 a2=0 a3=1 items=0 ppid=3608 pid=3729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.193000 audit[3730]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3730 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.193000 audit[3730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecf2cf60 a2=0 a3=1 items=0 ppid=3608 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:09:10.199000 audit[3732]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.199000 audit[3732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1226c30 a2=0 a3=1 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:09:10.202000 audit[3733]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.202000 audit[3733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcafed1d0 a2=0 a3=1 items=0 ppid=3608 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:09:10.208000 audit[3735]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.208000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4ee9860 a2=0 a3=1 items=0 ppid=3608 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 02:09:10.217000 audit[3738]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.217000 audit[3738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2ce8470 a2=0 a3=1 items=0 ppid=3608 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 02:09:10.227000 audit[3741]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.227000 audit[3741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdc604500 a2=0 a3=1 items=0 ppid=3608 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 02:09:10.230000 
audit[3742]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.230000 audit[3742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffed4c6b0 a2=0 a3=1 items=0 ppid=3608 pid=3742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:09:10.237000 audit[3744]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.237000 audit[3744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff66ba310 a2=0 a3=1 items=0 ppid=3608 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.247000 audit[3747]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.247000 audit[3747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe28bde80 a2=0 a3=1 items=0 ppid=3608 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.250000 audit[3748]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.250000 audit[3748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc61f2f80 a2=0 a3=1 items=0 ppid=3608 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:09:10.260000 audit[3750]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:09:10.260000 audit[3750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe9aba290 a2=0 a3=1 items=0 ppid=3608 pid=3750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:09:10.312000 audit[3756]: 
NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3756 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:10.312000 audit[3756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5719dc0 a2=0 a3=1 items=0 ppid=3608 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:10.323000 audit[3756]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3756 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:10.323000 audit[3756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd5719dc0 a2=0 a3=1 items=0 ppid=3608 pid=3756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:10.330000 audit[3761]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.330000 audit[3761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcbe1aeb0 a2=0 a3=1 items=0 ppid=3608 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:09:10.337000 audit[3763]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.337000 audit[3763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe75ff420 a2=0 a3=1 items=0 ppid=3608 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 02:09:10.347000 audit[3766]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.347000 audit[3766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe0846c00 a2=0 a3=1 items=0 ppid=3608 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.347000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 02:09:10.351000 audit[3767]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.351000 audit[3767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8689ac0 a2=0 a3=1 items=0 ppid=3608 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:09:10.357000 audit[3769]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.357000 audit[3769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd79d6b0 a2=0 a3=1 items=0 ppid=3608 pid=3769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:09:10.360000 audit[3770]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.360000 audit[3770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7d765b0 a2=0 a3=1 items=0 ppid=3608 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:09:10.367000 audit[3772]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.367000 audit[3772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdfa4db30 a2=0 a3=1 items=0 ppid=3608 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.380000 audit[3775]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.380000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe4cbefb0 a2=0 a3=1 items=0 ppid=3608 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.383000 audit[3776]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.383000 audit[3776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefa3fa70 a2=0 a3=1 items=0 ppid=3608 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:09:10.390000 audit[3778]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.390000 audit[3778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc2c8afe0 a2=0 a3=1 items=0 ppid=3608 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.390000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:09:10.393000 audit[3779]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.393000 audit[3779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe49ddb0 a2=0 a3=1 items=0 ppid=3608 pid=3779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:09:10.399000 audit[3781]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.399000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffddf51640 a2=0 a3=1 items=0 ppid=3608 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 02:09:10.410000 audit[3784]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.410000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=748 a0=3 a1=ffffd81a1370 a2=0 a3=1 items=0 ppid=3608 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 02:09:10.420000 audit[3787]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.420000 audit[3787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc142bac0 a2=0 a3=1 items=0 ppid=3608 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 02:09:10.423000 audit[3788]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.423000 audit[3788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9dadb70 a2=0 a3=1 items=0 ppid=3608 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:09:10.430000 audit[3790]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.430000 audit[3790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe777e110 a2=0 a3=1 items=0 ppid=3608 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:09:10.440000 audit[3793]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.440000 audit[3793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe46895b0 a2=0 a3=1 items=0 ppid=3608 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 
02:09:10.443000 audit[3794]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.443000 audit[3794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe789f4e0 a2=0 a3=1 items=0 ppid=3608 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:09:10.449000 audit[3796]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.449000 audit[3796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdae86350 a2=0 a3=1 items=0 ppid=3608 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.449000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:09:10.452000 audit[3797]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.452000 audit[3797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfa19ae0 a2=0 a3=1 items=0 ppid=3608 pid=3797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:09:10.460000 audit[3799]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.460000 audit[3799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffb2407f0 a2=0 a3=1 items=0 ppid=3608 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:09:10.471000 audit[3802]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:09:10.471000 audit[3802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd69e5f90 a2=0 a3=1 items=0 ppid=3608 pid=3802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:09:10.488000 audit[3804]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3804 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables-resto" Dec 16 02:09:10.488000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe4688880 a2=0 a3=1 items=0 ppid=3608 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.488000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:10.490000 audit[3804]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:09:10.490000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe4688880 a2=0 a3=1 items=0 ppid=3608 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:10.490000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:11.593746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3736765841.mount: Deactivated successfully. Dec 16 02:09:12.464468 containerd[1908]: time="2025-12-16T02:09:12.464218304Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:12.466113 containerd[1908]: time="2025-12-16T02:09:12.465719348Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 02:09:12.467381 containerd[1908]: time="2025-12-16T02:09:12.467290100Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:12.471587 containerd[1908]: time="2025-12-16T02:09:12.471517004Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:12.473384 containerd[1908]: time="2025-12-16T02:09:12.473297048Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.656194325s" Dec 16 02:09:12.473384 containerd[1908]: time="2025-12-16T02:09:12.473367896Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 02:09:12.485078 containerd[1908]: time="2025-12-16T02:09:12.484999856Z" level=info msg="CreateContainer within sandbox \"279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 02:09:12.503779 containerd[1908]: time="2025-12-16T02:09:12.503706308Z" level=info msg="Container 618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:12.511256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2960945037.mount: Deactivated successfully. 
Dec 16 02:09:12.518762 containerd[1908]: time="2025-12-16T02:09:12.518579564Z" level=info msg="CreateContainer within sandbox \"279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\"" Dec 16 02:09:12.521559 containerd[1908]: time="2025-12-16T02:09:12.520797212Z" level=info msg="StartContainer for \"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\"" Dec 16 02:09:12.522692 containerd[1908]: time="2025-12-16T02:09:12.522619796Z" level=info msg="connecting to shim 618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa" address="unix:///run/containerd/s/5a72882a5edc2d3952d612fdcb706c3dc91188d6333ab1c3f1b8dc912f658d5a" protocol=ttrpc version=3 Dec 16 02:09:12.565787 systemd[1]: Started cri-containerd-618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa.scope - libcontainer container 618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa. Dec 16 02:09:12.592000 audit: BPF prog-id=153 op=LOAD Dec 16 02:09:12.594000 audit: BPF prog-id=154 op=LOAD Dec 16 02:09:12.594000 audit[3813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.594000 audit: BPF prog-id=154 op=UNLOAD Dec 16 02:09:12.594000 audit[3813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.594000 audit: BPF prog-id=155 op=LOAD Dec 16 02:09:12.594000 audit[3813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.594000 audit: BPF prog-id=156 op=LOAD Dec 16 02:09:12.594000 audit[3813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.594000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.595000 audit: BPF prog-id=156 op=UNLOAD Dec 16 02:09:12.595000 audit[3813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.595000 audit: BPF prog-id=155 op=UNLOAD Dec 16 02:09:12.595000 audit[3813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.595000 audit: BPF prog-id=157 op=LOAD Dec 16 02:09:12.595000 audit[3813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3634 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:12.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631383836306336373736613832626237336366613034353233396236 Dec 16 02:09:12.637724 containerd[1908]: time="2025-12-16T02:09:12.637552365Z" level=info msg="StartContainer for \"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\" returns successfully" Dec 16 02:09:13.169995 kubelet[3500]: I1216 02:09:13.169735 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-ghhwl" podStartSLOduration=1.510047514 podStartE2EDuration="4.169705651s" podCreationTimestamp="2025-12-16 02:09:09 +0000 UTC" firstStartedPulling="2025-12-16 02:09:09.816000367 +0000 UTC m=+5.164076283" lastFinishedPulling="2025-12-16 02:09:12.475658504 +0000 UTC m=+7.823734420" observedRunningTime="2025-12-16 02:09:13.138645307 +0000 UTC m=+8.486721271" watchObservedRunningTime="2025-12-16 02:09:13.169705651 +0000 UTC m=+8.517781675" Dec 16 02:09:21.725477 sudo[2252]: pam_unix(sudo:session): session closed for user root Dec 16 02:09:21.725000 audit[2252]: USER_END pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 02:09:21.727885 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 02:09:21.728043 kernel: audit: type=1106 audit(1765850961.725:528): pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:09:21.725000 audit[2252]: CRED_DISP pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:09:21.742577 kernel: audit: type=1104 audit(1765850961.725:529): pid=2252 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:09:21.757201 sshd[2251]: Connection closed by 139.178.89.65 port 55590 Dec 16 02:09:21.757017 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Dec 16 02:09:21.767000 audit[2247]: USER_END pid=2247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:21.782389 systemd[1]: sshd@6-172.31.24.92:22-139.178.89.65:55590.service: Deactivated successfully. Dec 16 02:09:21.768000 audit[2247]: CRED_DISP pid=2247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:21.792224 kernel: audit: type=1106 audit(1765850961.767:530): pid=2247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:21.792442 kernel: audit: type=1104 audit(1765850961.768:531): pid=2247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:09:21.797702 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 02:09:21.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.92:22-139.178.89.65:55590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:21.803562 kernel: audit: type=1131 audit(1765850961.782:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.92:22-139.178.89.65:55590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:09:21.803641 systemd[1]: session-8.scope: Consumed 10.758s CPU time, 222.9M memory peak. Dec 16 02:09:21.809257 systemd-logind[1853]: Session 8 logged out. Waiting for processes to exit. Dec 16 02:09:21.816223 systemd-logind[1853]: Removed session 8. 
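Note: the pod_startup_latency_tracker record above for tigera-operator-65cdcdfd6d-ghhwl reports podStartE2EDuration=4.169705651s and podStartSLOduration=1.510047514s. The two figures are consistent with end-to-end startup time and end-to-end time minus the image-pull window; a small sketch re-deriving them from the logged timestamps (seconds within minute 02:09, copied from the record; treating the SLO figure as E2E minus pull time is an interpretation of how the logged numbers fit together, not something stated in the log itself):

    from decimal import Decimal as D

    pod_created        = D("9.000000000")   # podCreationTimestamp 02:09:09
    first_started_pull = D("9.816000367")   # firstStartedPulling
    last_finished_pull = D("12.475658504")  # lastFinishedPulling
    observed_running   = D("13.169705651")  # observedRunningTime

    image_pull = last_finished_pull - first_started_pull  # 2.659658137 s
    e2e        = observed_running - pod_created           # 4.169705651 s -> podStartE2EDuration
    slo        = e2e - image_pull                         # 1.510047514 s -> podStartSLOduration
    print(e2e, slo)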
Dec 16 02:09:25.129000 audit[3893]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:25.129000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd35b0180 a2=0 a3=1 items=0 ppid=3608 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:25.142446 kernel: audit: type=1325 audit(1765850965.129:533): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:25.142572 kernel: audit: type=1300 audit(1765850965.129:533): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd35b0180 a2=0 a3=1 items=0 ppid=3608 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:25.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:25.146562 kernel: audit: type=1327 audit(1765850965.129:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:25.148000 audit[3893]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:25.160532 kernel: audit: type=1325 audit(1765850965.148:534): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:25.160679 kernel: audit: type=1300 audit(1765850965.148:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd35b0180 a2=0 a3=1 items=0 ppid=3608 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:25.148000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd35b0180 a2=0 a3=1 items=0 ppid=3608 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:25.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:26.306000 audit[3895]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:26.306000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd46ba340 a2=0 a3=1 items=0 ppid=3608 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:26.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:26.311000 audit[3895]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:26.311000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=2700 a0=3 a1=ffffd46ba340 a2=0 a3=1 items=0 ppid=3608 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:26.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:33.703000 audit[3898]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:33.708985 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:09:33.709109 kernel: audit: type=1325 audit(1765850973.703:537): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:33.703000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffc936ff0 a2=0 a3=1 items=0 ppid=3608 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:33.716477 kernel: audit: type=1300 audit(1765850973.703:537): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffc936ff0 a2=0 a3=1 items=0 ppid=3608 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:33.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:33.719961 kernel: audit: type=1327 audit(1765850973.703:537): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:33.716000 audit[3898]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:33.716000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc936ff0 a2=0 a3=1 items=0 ppid=3608 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:33.731643 kernel: audit: type=1325 audit(1765850973.716:538): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:33.731814 kernel: audit: type=1300 audit(1765850973.716:538): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc936ff0 a2=0 a3=1 items=0 ppid=3608 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:33.736387 kernel: audit: type=1327 audit(1765850973.716:538): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:33.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:34.759000 audit[3901]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:34.759000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=7480 a0=3 a1=ffffce9218c0 a2=0 a3=1 items=0 ppid=3608 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:34.764803 kernel: audit: type=1325 audit(1765850974.759:539): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:34.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:34.777448 kernel: audit: type=1300 audit(1765850974.759:539): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffce9218c0 a2=0 a3=1 items=0 ppid=3608 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:34.777599 kernel: audit: type=1327 audit(1765850974.759:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:34.764000 audit[3901]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:34.781997 kernel: audit: type=1325 audit(1765850974.764:540): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:34.764000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce9218c0 a2=0 a3=1 items=0 ppid=3608 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:34.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:39.278235 kubelet[3500]: E1216 02:09:39.277617 3500 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-24-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-92' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Dec 16 02:09:39.278235 kubelet[3500]: E1216 02:09:39.277991 3500 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-24-92\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-92' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret" Dec 16 02:09:39.278235 kubelet[3500]: E1216 02:09:39.278118 3500 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-24-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-92' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Dec 16 02:09:39.288925 systemd[1]: Created slice 
kubepods-besteffort-pod6234432b_cc90_4682_8a5d_e8c0a2c8837f.slice - libcontainer container kubepods-besteffort-pod6234432b_cc90_4682_8a5d_e8c0a2c8837f.slice. Dec 16 02:09:39.298788 kubelet[3500]: E1216 02:09:39.298710 3500 status_manager.go:1018] "Failed to get status for pod" err="pods \"calico-typha-7f44f9b84d-kd557\" is forbidden: User \"system:node:ip-172-31-24-92\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-92' and this object" podUID="6234432b-cc90-4682-8a5d-e8c0a2c8837f" pod="calico-system/calico-typha-7f44f9b84d-kd557" Dec 16 02:09:39.306506 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 16 02:09:39.306644 kernel: audit: type=1325 audit(1765850979.300:541): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:39.300000 audit[3905]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:39.300000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcf506790 a2=0 a3=1 items=0 ppid=3608 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.325454 kernel: audit: type=1300 audit(1765850979.300:541): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcf506790 a2=0 a3=1 items=0 ppid=3608 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:39.331449 kernel: audit: type=1327 audit(1765850979.300:541): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:39.334000 audit[3905]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:39.334000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf506790 a2=0 a3=1 items=0 ppid=3608 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.346147 kernel: audit: type=1325 audit(1765850979.334:542): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:39.346275 kernel: audit: type=1300 audit(1765850979.334:542): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf506790 a2=0 a3=1 items=0 ppid=3608 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:39.348456 kubelet[3500]: I1216 02:09:39.346651 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6234432b-cc90-4682-8a5d-e8c0a2c8837f-typha-certs\") pod \"calico-typha-7f44f9b84d-kd557\" (UID: 
\"6234432b-cc90-4682-8a5d-e8c0a2c8837f\") " pod="calico-system/calico-typha-7f44f9b84d-kd557" Dec 16 02:09:39.348456 kubelet[3500]: I1216 02:09:39.346732 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nczl2\" (UniqueName: \"kubernetes.io/projected/6234432b-cc90-4682-8a5d-e8c0a2c8837f-kube-api-access-nczl2\") pod \"calico-typha-7f44f9b84d-kd557\" (UID: \"6234432b-cc90-4682-8a5d-e8c0a2c8837f\") " pod="calico-system/calico-typha-7f44f9b84d-kd557" Dec 16 02:09:39.348456 kubelet[3500]: I1216 02:09:39.346787 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6234432b-cc90-4682-8a5d-e8c0a2c8837f-tigera-ca-bundle\") pod \"calico-typha-7f44f9b84d-kd557\" (UID: \"6234432b-cc90-4682-8a5d-e8c0a2c8837f\") " pod="calico-system/calico-typha-7f44f9b84d-kd557" Dec 16 02:09:39.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:39.354452 kernel: audit: type=1327 audit(1765850979.334:542): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:39.472568 systemd[1]: Created slice kubepods-besteffort-pod6c68f210_a253_45d5_bf4c_ea80e7392c79.slice - libcontainer container kubepods-besteffort-pod6c68f210_a253_45d5_bf4c_ea80e7392c79.slice. Dec 16 02:09:39.549958 kubelet[3500]: I1216 02:09:39.549105 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-flexvol-driver-host\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.550306 kubelet[3500]: I1216 02:09:39.550260 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-lib-modules\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.551345 kubelet[3500]: I1216 02:09:39.551041 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-cni-log-dir\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552210 kubelet[3500]: I1216 02:09:39.551584 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-cni-net-dir\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552210 kubelet[3500]: I1216 02:09:39.551668 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-cni-bin-dir\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552210 kubelet[3500]: I1216 02:09:39.551711 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-xtables-lock\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552210 kubelet[3500]: I1216 02:09:39.551756 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-var-lib-calico\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552210 kubelet[3500]: I1216 02:09:39.551794 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6c68f210-a253-45d5-bf4c-ea80e7392c79-node-certs\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552649 kubelet[3500]: I1216 02:09:39.551832 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-policysync\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552649 kubelet[3500]: I1216 02:09:39.551868 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6c68f210-a253-45d5-bf4c-ea80e7392c79-var-run-calico\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552649 kubelet[3500]: I1216 02:09:39.551910 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrwn\" (UniqueName: \"kubernetes.io/projected/6c68f210-a253-45d5-bf4c-ea80e7392c79-kube-api-access-ghrwn\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.552649 kubelet[3500]: I1216 02:09:39.551948 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c68f210-a253-45d5-bf4c-ea80e7392c79-tigera-ca-bundle\") pod \"calico-node-895lg\" (UID: \"6c68f210-a253-45d5-bf4c-ea80e7392c79\") " pod="calico-system/calico-node-895lg" Dec 16 02:09:39.612399 kubelet[3500]: E1216 02:09:39.611606 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:39.653330 kubelet[3500]: I1216 02:09:39.653281 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aaad2db4-9021-4d31-8275-e9b7ba731389-kubelet-dir\") pod \"csi-node-driver-7f5sg\" (UID: \"aaad2db4-9021-4d31-8275-e9b7ba731389\") " pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:39.654278 kubelet[3500]: I1216 02:09:39.654122 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk4fp\" (UniqueName: 
\"kubernetes.io/projected/aaad2db4-9021-4d31-8275-e9b7ba731389-kube-api-access-gk4fp\") pod \"csi-node-driver-7f5sg\" (UID: \"aaad2db4-9021-4d31-8275-e9b7ba731389\") " pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:39.654894 kubelet[3500]: I1216 02:09:39.654840 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaad2db4-9021-4d31-8275-e9b7ba731389-registration-dir\") pod \"csi-node-driver-7f5sg\" (UID: \"aaad2db4-9021-4d31-8275-e9b7ba731389\") " pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:39.655553 kubelet[3500]: I1216 02:09:39.655352 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaad2db4-9021-4d31-8275-e9b7ba731389-socket-dir\") pod \"csi-node-driver-7f5sg\" (UID: \"aaad2db4-9021-4d31-8275-e9b7ba731389\") " pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:39.658097 kubelet[3500]: I1216 02:09:39.657497 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aaad2db4-9021-4d31-8275-e9b7ba731389-varrun\") pod \"csi-node-driver-7f5sg\" (UID: \"aaad2db4-9021-4d31-8275-e9b7ba731389\") " pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:39.682732 kubelet[3500]: E1216 02:09:39.682667 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.682732 kubelet[3500]: W1216 02:09:39.682716 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.682938 kubelet[3500]: E1216 02:09:39.682773 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.759820 kubelet[3500]: E1216 02:09:39.759278 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.759820 kubelet[3500]: W1216 02:09:39.759465 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.759820 kubelet[3500]: E1216 02:09:39.759516 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.761152 kubelet[3500]: E1216 02:09:39.760719 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.761152 kubelet[3500]: W1216 02:09:39.760757 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.761152 kubelet[3500]: E1216 02:09:39.760791 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:39.761670 kubelet[3500]: E1216 02:09:39.761302 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.761670 kubelet[3500]: W1216 02:09:39.761333 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.761670 kubelet[3500]: E1216 02:09:39.761364 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.762535 kubelet[3500]: E1216 02:09:39.762481 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.762680 kubelet[3500]: W1216 02:09:39.762549 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.762680 kubelet[3500]: E1216 02:09:39.762588 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.763717 kubelet[3500]: E1216 02:09:39.763665 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.763717 kubelet[3500]: W1216 02:09:39.763705 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.765102 kubelet[3500]: E1216 02:09:39.763743 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.765102 kubelet[3500]: E1216 02:09:39.764934 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.765102 kubelet[3500]: W1216 02:09:39.764963 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.765102 kubelet[3500]: E1216 02:09:39.764992 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.765748 kubelet[3500]: E1216 02:09:39.765684 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.765748 kubelet[3500]: W1216 02:09:39.765737 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.766038 kubelet[3500]: E1216 02:09:39.765771 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:39.766804 kubelet[3500]: E1216 02:09:39.766747 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.766804 kubelet[3500]: W1216 02:09:39.766790 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.767188 kubelet[3500]: E1216 02:09:39.766826 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.767880 kubelet[3500]: E1216 02:09:39.767777 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.767880 kubelet[3500]: W1216 02:09:39.767820 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.767880 kubelet[3500]: E1216 02:09:39.767872 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.768571 kubelet[3500]: E1216 02:09:39.768536 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.768660 kubelet[3500]: W1216 02:09:39.768571 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.768660 kubelet[3500]: E1216 02:09:39.768603 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.769131 kubelet[3500]: E1216 02:09:39.769098 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.769131 kubelet[3500]: W1216 02:09:39.769129 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.769473 kubelet[3500]: E1216 02:09:39.769159 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.769817 kubelet[3500]: E1216 02:09:39.769778 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.769958 kubelet[3500]: W1216 02:09:39.769816 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.769958 kubelet[3500]: E1216 02:09:39.769883 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:39.770596 kubelet[3500]: E1216 02:09:39.770556 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.770596 kubelet[3500]: W1216 02:09:39.770594 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.771159 kubelet[3500]: E1216 02:09:39.770625 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.771548 kubelet[3500]: E1216 02:09:39.771507 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.771697 kubelet[3500]: W1216 02:09:39.771546 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.771697 kubelet[3500]: E1216 02:09:39.771582 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.772745 kubelet[3500]: E1216 02:09:39.772701 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.772958 kubelet[3500]: W1216 02:09:39.772763 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.772958 kubelet[3500]: E1216 02:09:39.772801 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.773550 kubelet[3500]: E1216 02:09:39.773497 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.773550 kubelet[3500]: W1216 02:09:39.773540 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.774031 kubelet[3500]: E1216 02:09:39.773577 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.775260 kubelet[3500]: E1216 02:09:39.775187 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.775260 kubelet[3500]: W1216 02:09:39.775235 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.775841 kubelet[3500]: E1216 02:09:39.775273 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:39.776537 kubelet[3500]: E1216 02:09:39.776484 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.776537 kubelet[3500]: W1216 02:09:39.776526 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.777051 kubelet[3500]: E1216 02:09:39.776561 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.777648 kubelet[3500]: E1216 02:09:39.777590 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.777648 kubelet[3500]: W1216 02:09:39.777634 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.777989 kubelet[3500]: E1216 02:09:39.777671 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.778877 kubelet[3500]: E1216 02:09:39.778504 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.778877 kubelet[3500]: W1216 02:09:39.778546 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.778877 kubelet[3500]: E1216 02:09:39.778581 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.779391 kubelet[3500]: E1216 02:09:39.779278 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.779391 kubelet[3500]: W1216 02:09:39.779385 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.779391 kubelet[3500]: E1216 02:09:39.779489 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.780562 kubelet[3500]: E1216 02:09:39.780189 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.780562 kubelet[3500]: W1216 02:09:39.780259 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.780562 kubelet[3500]: E1216 02:09:39.780321 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:39.780990 kubelet[3500]: E1216 02:09:39.780918 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.780990 kubelet[3500]: W1216 02:09:39.780983 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.781197 kubelet[3500]: E1216 02:09:39.781046 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.781806 kubelet[3500]: E1216 02:09:39.781727 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.781806 kubelet[3500]: W1216 02:09:39.781768 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.781806 kubelet[3500]: E1216 02:09:39.781801 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:39.782405 kubelet[3500]: E1216 02:09:39.782356 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:39.782405 kubelet[3500]: W1216 02:09:39.782393 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:39.782405 kubelet[3500]: E1216 02:09:39.782460 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.257998 kubelet[3500]: E1216 02:09:40.257910 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.257998 kubelet[3500]: W1216 02:09:40.257979 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.258200 kubelet[3500]: E1216 02:09:40.258042 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.260340 kubelet[3500]: E1216 02:09:40.260304 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.260635 kubelet[3500]: W1216 02:09:40.260541 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.260635 kubelet[3500]: E1216 02:09:40.260580 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:40.364000 audit[3945]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:40.364000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffff27c2b0 a2=0 a3=1 items=0 ppid=3608 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.375038 kernel: audit: type=1325 audit(1765850980.364:543): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:40.375185 kernel: audit: type=1300 audit(1765850980.364:543): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffff27c2b0 a2=0 a3=1 items=0 ppid=3608 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:40.378214 kernel: audit: type=1327 audit(1765850980.364:543): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:40.378441 kernel: audit: type=1325 audit(1765850980.375:544): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:40.375000 audit[3945]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:40.375000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff27c2b0 a2=0 a3=1 items=0 ppid=3608 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:40.428662 kubelet[3500]: E1216 02:09:40.428509 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.432706 kubelet[3500]: W1216 02:09:40.429098 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.432706 kubelet[3500]: E1216 02:09:40.429142 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:40.437776 kubelet[3500]: E1216 02:09:40.437625 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.438717 kubelet[3500]: W1216 02:09:40.438489 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.440472 kubelet[3500]: E1216 02:09:40.439254 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.443822 kubelet[3500]: E1216 02:09:40.443720 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.444104 kubelet[3500]: W1216 02:09:40.443983 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.444340 kubelet[3500]: E1216 02:09:40.444255 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.449175 kubelet[3500]: E1216 02:09:40.449142 3500 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Dec 16 02:09:40.449615 kubelet[3500]: E1216 02:09:40.449498 3500 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6234432b-cc90-4682-8a5d-e8c0a2c8837f-typha-certs podName:6234432b-cc90-4682-8a5d-e8c0a2c8837f nodeName:}" failed. No retries permitted until 2025-12-16 02:09:40.949462027 +0000 UTC m=+36.297537943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/6234432b-cc90-4682-8a5d-e8c0a2c8837f-typha-certs") pod "calico-typha-7f44f9b84d-kd557" (UID: "6234432b-cc90-4682-8a5d-e8c0a2c8837f") : failed to sync secret cache: timed out waiting for the condition Dec 16 02:09:40.476497 kubelet[3500]: E1216 02:09:40.476399 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.476497 kubelet[3500]: W1216 02:09:40.476490 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.476700 kubelet[3500]: E1216 02:09:40.476523 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:40.578764 kubelet[3500]: E1216 02:09:40.577682 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.578764 kubelet[3500]: W1216 02:09:40.577732 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.578764 kubelet[3500]: E1216 02:09:40.577764 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.679345 kubelet[3500]: E1216 02:09:40.679190 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.679345 kubelet[3500]: W1216 02:09:40.679226 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.679345 kubelet[3500]: E1216 02:09:40.679258 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.691438 containerd[1908]: time="2025-12-16T02:09:40.691321260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-895lg,Uid:6c68f210-a253-45d5-bf4c-ea80e7392c79,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:40.739627 containerd[1908]: time="2025-12-16T02:09:40.739539480Z" level=info msg="connecting to shim a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a" address="unix:///run/containerd/s/045030b7c328c54bca72b955db729ddaa1467fa7ba3ab052475179968efa8b83" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:40.780192 kubelet[3500]: E1216 02:09:40.780024 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.780192 kubelet[3500]: W1216 02:09:40.780061 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.780192 kubelet[3500]: E1216 02:09:40.780095 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.803865 systemd[1]: Started cri-containerd-a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a.scope - libcontainer container a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a. 
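Annotation: the repeated "unexpected end of JSON input" errors above come from the kubelet's FlexVolume prober. The driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not on the node yet, so the driver call captures empty stdout, and decoding an empty payload fails. A throwaway Go sketch (not the kubelet's own code, just the standard-library behaviour it relies on) reproduces the exact error string:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // The FlexVolume probe unmarshals whatever the driver printed on stdout.
        // A missing binary yields empty output, and decoding "" fails with the
        // same message the kubelet keeps logging above.
        var status map[string]interface{}
        err := json.Unmarshal([]byte(""), &status)
        fmt.Println(err) // unexpected end of JSON input
    }

The warning paired with each error ("executable file not found in $PATH") is the underlying cause; the JSON error is its downstream symptom, which is why the three lines repeat together on every probe of the nodeagent~uds directory.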
Dec 16 02:09:40.824000 audit: BPF prog-id=158 op=LOAD Dec 16 02:09:40.827000 audit: BPF prog-id=159 op=LOAD Dec 16 02:09:40.827000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.828000 audit: BPF prog-id=159 op=UNLOAD Dec 16 02:09:40.828000 audit[3974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.829000 audit: BPF prog-id=160 op=LOAD Dec 16 02:09:40.829000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.830000 audit: BPF prog-id=161 op=LOAD Dec 16 02:09:40.830000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.830000 audit: BPF prog-id=161 op=UNLOAD Dec 16 02:09:40.830000 audit[3974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.830000 audit: BPF prog-id=160 op=UNLOAD Dec 16 02:09:40.830000 audit[3974]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.830000 audit: BPF prog-id=162 op=LOAD Dec 16 02:09:40.830000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3963 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:40.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135643061333930653937653263333332363035346365343362613034 Dec 16 02:09:40.870987 containerd[1908]: time="2025-12-16T02:09:40.870849481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-895lg,Uid:6c68f210-a253-45d5-bf4c-ea80e7392c79,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\"" Dec 16 02:09:40.876148 containerd[1908]: time="2025-12-16T02:09:40.876080017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 02:09:40.883942 kubelet[3500]: E1216 02:09:40.883846 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.883942 kubelet[3500]: W1216 02:09:40.883917 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.884118 kubelet[3500]: E1216 02:09:40.883980 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.986167 kubelet[3500]: E1216 02:09:40.986032 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.986167 kubelet[3500]: W1216 02:09:40.986089 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.986167 kubelet[3500]: E1216 02:09:40.986126 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:40.986773 kubelet[3500]: E1216 02:09:40.986708 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.986879 kubelet[3500]: W1216 02:09:40.986773 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.986879 kubelet[3500]: E1216 02:09:40.986842 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.987464 kubelet[3500]: E1216 02:09:40.987423 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.987464 kubelet[3500]: W1216 02:09:40.987461 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.987676 kubelet[3500]: E1216 02:09:40.987521 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.987969 kubelet[3500]: E1216 02:09:40.987934 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.988060 kubelet[3500]: W1216 02:09:40.987967 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.988060 kubelet[3500]: E1216 02:09:40.987997 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:40.988513 kubelet[3500]: E1216 02:09:40.988475 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:40.988513 kubelet[3500]: W1216 02:09:40.988511 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:40.988745 kubelet[3500]: E1216 02:09:40.988544 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:09:40.992043 kubelet[3500]: E1216 02:09:40.991147 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:41.006020 kubelet[3500]: E1216 02:09:41.005700 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:09:41.007625 kubelet[3500]: W1216 02:09:41.007579 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:09:41.007793 kubelet[3500]: E1216 02:09:41.007767 3500 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:09:41.103688 containerd[1908]: time="2025-12-16T02:09:41.102278770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f44f9b84d-kd557,Uid:6234432b-cc90-4682-8a5d-e8c0a2c8837f,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:41.155404 containerd[1908]: time="2025-12-16T02:09:41.155331635Z" level=info msg="connecting to shim 4af3ab060a5a63ef7e8c46f3753a5c0688f40937761f9072ec16005af7d8084c" address="unix:///run/containerd/s/f96a478be3e2d9350b072fa3f7c5a71cf5e521a08bd541089ff74a2e2d256c78" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:09:41.214833 systemd[1]: Started cri-containerd-4af3ab060a5a63ef7e8c46f3753a5c0688f40937761f9072ec16005af7d8084c.scope - libcontainer container 4af3ab060a5a63ef7e8c46f3753a5c0688f40937761f9072ec16005af7d8084c. 
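Annotation: the audit records above and below carry the triggering command line as a hex-encoded, NUL-separated PROCTITLE field (arch=c00000b7 is AUDIT_ARCH_AARCH64; syscall 211 here is sendmsg, the netlink push of the nftables ruleset, and syscall 280 is bpf). A small Go sketch decodes the field; decodeProctitle is an illustrative helper, not part of any logged tool. The iptables-restore records decode to "iptables-restore -w 5 --noflush --counters", and the runc records to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id>", with the tail cut off in the record itself.

    package main

    import (
        "bytes"
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex payload back into the
    // NUL-separated argv it encodes.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        args := bytes.Split(raw, []byte{0})
        parts := make([]string, 0, len(args))
        for _, a := range args {
            parts = append(parts, string(a))
        }
        return strings.Join(parts, " "), nil
    }

    func main() {
        // Hex string copied from one of the NETFILTER_CFG records above.
        const p = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
        s, err := decodeProctitle(p)
        if err != nil {
            panic(err)
        }
        fmt.Println(s) // iptables-restore -w 5 --noflush --counters
    }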
Dec 16 02:09:41.250000 audit: BPF prog-id=163 op=LOAD Dec 16 02:09:41.251000 audit: BPF prog-id=164 op=LOAD Dec 16 02:09:41.251000 audit[4031]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.252000 audit: BPF prog-id=164 op=UNLOAD Dec 16 02:09:41.252000 audit[4031]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.252000 audit: BPF prog-id=165 op=LOAD Dec 16 02:09:41.252000 audit[4031]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.253000 audit: BPF prog-id=166 op=LOAD Dec 16 02:09:41.253000 audit[4031]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.253000 audit: BPF prog-id=166 op=UNLOAD Dec 16 02:09:41.253000 audit[4031]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.253000 audit: BPF prog-id=165 op=UNLOAD Dec 16 02:09:41.253000 audit[4031]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.254000 audit: BPF prog-id=167 op=LOAD Dec 16 02:09:41.254000 audit[4031]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4018 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:41.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461663361623036306135613633656637653863343666333735336135 Dec 16 02:09:41.317210 containerd[1908]: time="2025-12-16T02:09:41.316967063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f44f9b84d-kd557,Uid:6234432b-cc90-4682-8a5d-e8c0a2c8837f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4af3ab060a5a63ef7e8c46f3753a5c0688f40937761f9072ec16005af7d8084c\"" Dec 16 02:09:42.078046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4254333055.mount: Deactivated successfully. Dec 16 02:09:42.228278 containerd[1908]: time="2025-12-16T02:09:42.227583672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:42.231865 containerd[1908]: time="2025-12-16T02:09:42.231681456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 02:09:42.233687 containerd[1908]: time="2025-12-16T02:09:42.233523084Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:42.240598 containerd[1908]: time="2025-12-16T02:09:42.240477444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:42.242965 containerd[1908]: time="2025-12-16T02:09:42.242735196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.366582399s" Dec 16 02:09:42.242965 containerd[1908]: time="2025-12-16T02:09:42.242801664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 02:09:42.246836 containerd[1908]: time="2025-12-16T02:09:42.246202188Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 02:09:42.257314 containerd[1908]: time="2025-12-16T02:09:42.257255652Z" level=info msg="CreateContainer within sandbox \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 02:09:42.280475 containerd[1908]: time="2025-12-16T02:09:42.278777364Z" level=info msg="Container 8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:42.304388 containerd[1908]: time="2025-12-16T02:09:42.304285380Z" level=info msg="CreateContainer within sandbox \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f\"" Dec 16 02:09:42.308714 containerd[1908]: time="2025-12-16T02:09:42.308634348Z" level=info msg="StartContainer for \"8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f\"" Dec 16 02:09:42.312148 containerd[1908]: time="2025-12-16T02:09:42.311970600Z" level=info msg="connecting to shim 8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f" address="unix:///run/containerd/s/045030b7c328c54bca72b955db729ddaa1467fa7ba3ab052475179968efa8b83" protocol=ttrpc version=3 Dec 16 02:09:42.367845 systemd[1]: Started cri-containerd-8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f.scope - libcontainer container 8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f. Dec 16 02:09:42.455000 audit: BPF prog-id=168 op=LOAD Dec 16 02:09:42.455000 audit[4064]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=3963 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:42.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383863616236653766323262636535333264616430313430626636 Dec 16 02:09:42.456000 audit: BPF prog-id=169 op=LOAD Dec 16 02:09:42.456000 audit[4064]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=3963 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:42.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383863616236653766323262636535333264616430313430626636 Dec 16 02:09:42.456000 audit: BPF prog-id=169 op=UNLOAD Dec 16 02:09:42.456000 audit[4064]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:42.456000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383863616236653766323262636535333264616430313430626636 Dec 16 02:09:42.456000 audit: BPF prog-id=168 op=UNLOAD Dec 16 02:09:42.456000 audit[4064]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:42.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383863616236653766323262636535333264616430313430626636 Dec 16 02:09:42.456000 audit: BPF prog-id=170 op=LOAD Dec 16 02:09:42.456000 audit[4064]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=3963 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:42.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383863616236653766323262636535333264616430313430626636 Dec 16 02:09:42.501506 containerd[1908]: time="2025-12-16T02:09:42.501249349Z" level=info msg="StartContainer for \"8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f\" returns successfully" Dec 16 02:09:42.543007 systemd[1]: cri-containerd-8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f.scope: Deactivated successfully. Dec 16 02:09:42.550000 audit: BPF prog-id=170 op=UNLOAD Dec 16 02:09:42.553261 containerd[1908]: time="2025-12-16T02:09:42.553197853Z" level=info msg="received container exit event container_id:\"8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f\" id:\"8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f\" pid:4077 exited_at:{seconds:1765850982 nanos:551958169}" Dec 16 02:09:42.991510 kubelet[3500]: E1216 02:09:42.990240 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:43.001116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8588cab6e7f22bce532dad0140bf6a4c16ab96f646a08ec1dd3b6dd9027bed1f-rootfs.mount: Deactivated successfully. 
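Annotation: the exit event above reports the container's end time as a protobuf-style seconds/nanos pair rather than a formatted timestamp. A small Go sketch converts it and confirms it lines up with the surrounding wall-clock entries (02:09:42.55 on Dec 16):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at:{seconds:1765850982 nanos:551958169}, copied from the
        // exit event for the flexvol-driver container above.
        exited := time.Unix(1765850982, 551958169).UTC()
        fmt.Println(exited.Format(time.RFC3339Nano)) // 2025-12-16T02:09:42.551958169Z
    }

This exit is expected rather than a failure: flexvol-driver is the short-lived Calico init container built from the pod2daemon-flexvol image, which installs the very uds binary whose absence produced the FlexVolume probe errors earlier, so the scope deactivation and rootfs unmount that follow are normal cleanup.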
Dec 16 02:09:44.225189 containerd[1908]: time="2025-12-16T02:09:44.225104102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.227400 containerd[1908]: time="2025-12-16T02:09:44.227314406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Dec 16 02:09:44.227915 containerd[1908]: time="2025-12-16T02:09:44.227859482Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.238463 containerd[1908]: time="2025-12-16T02:09:44.238380638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:44.240782 containerd[1908]: time="2025-12-16T02:09:44.240700274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.994426962s" Dec 16 02:09:44.240782 containerd[1908]: time="2025-12-16T02:09:44.240779150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 02:09:44.247524 containerd[1908]: time="2025-12-16T02:09:44.247345634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 02:09:44.289067 containerd[1908]: time="2025-12-16T02:09:44.288978806Z" level=info msg="CreateContainer within sandbox \"4af3ab060a5a63ef7e8c46f3753a5c0688f40937761f9072ec16005af7d8084c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 02:09:44.309059 containerd[1908]: time="2025-12-16T02:09:44.307337966Z" level=info msg="Container 6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:44.319974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479148898.mount: Deactivated successfully. Dec 16 02:09:44.327309 containerd[1908]: time="2025-12-16T02:09:44.327209030Z" level=info msg="CreateContainer within sandbox \"4af3ab060a5a63ef7e8c46f3753a5c0688f40937761f9072ec16005af7d8084c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e\"" Dec 16 02:09:44.329344 containerd[1908]: time="2025-12-16T02:09:44.329223986Z" level=info msg="StartContainer for \"6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e\"" Dec 16 02:09:44.332710 containerd[1908]: time="2025-12-16T02:09:44.332626370Z" level=info msg="connecting to shim 6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e" address="unix:///run/containerd/s/f96a478be3e2d9350b072fa3f7c5a71cf5e521a08bd541089ff74a2e2d256c78" protocol=ttrpc version=3 Dec 16 02:09:44.383841 systemd[1]: Started cri-containerd-6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e.scope - libcontainer container 6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e. 
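Annotation: containerd reports the typha pull as taking 1.994426962s. Subtracting the PullImage request timestamp (time="2025-12-16T02:09:42.246202188Z", a few entries up) from the completion timestamp above reproduces that figure to within a fraction of a millisecond. A throwaway Go sketch of the arithmetic, using the two logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the two containerd entries for the typha image.
        start, err := time.Parse(time.RFC3339Nano, "2025-12-16T02:09:42.246202188Z")
        if err != nil {
            panic(err)
        }
        done, err := time.Parse(time.RFC3339Nano, "2025-12-16T02:09:44.240700274Z")
        if err != nil {
            panic(err)
        }
        // Prints 1.994498086s; the logged 1.994426962s comes from containerd's
        // own timer, so a ~70µs skew against the log timestamps is expected.
        fmt.Println(done.Sub(start))
    }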
Dec 16 02:09:44.422475 kernel: kauditd_printk_skb: 62 callbacks suppressed Dec 16 02:09:44.422622 kernel: audit: type=1334 audit(1765850984.419:567): prog-id=171 op=LOAD Dec 16 02:09:44.419000 audit: BPF prog-id=171 op=LOAD Dec 16 02:09:44.423000 audit: BPF prog-id=172 op=LOAD Dec 16 02:09:44.425643 kernel: audit: type=1334 audit(1765850984.423:568): prog-id=172 op=LOAD Dec 16 02:09:44.432177 kernel: audit: type=1300 audit(1765850984.423:568): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.423000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.438289 kernel: audit: type=1327 audit(1765850984.423:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.423000 audit: BPF prog-id=172 op=UNLOAD Dec 16 02:09:44.440377 kernel: audit: type=1334 audit(1765850984.423:569): prog-id=172 op=UNLOAD Dec 16 02:09:44.423000 audit[4119]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.446656 kernel: audit: type=1300 audit(1765850984.423:569): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.452727 kernel: audit: type=1327 audit(1765850984.423:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.423000 audit: BPF prog-id=173 op=LOAD Dec 16 02:09:44.454847 kernel: audit: type=1334 audit(1765850984.423:570): prog-id=173 op=LOAD Dec 16 02:09:44.454957 kernel: audit: type=1300 audit(1765850984.423:570): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.423000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.466885 kernel: audit: type=1327 audit(1765850984.423:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.425000 audit: BPF prog-id=174 op=LOAD Dec 16 02:09:44.425000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.438000 audit: BPF prog-id=174 op=UNLOAD Dec 16 02:09:44.438000 audit[4119]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.438000 audit: BPF prog-id=173 op=UNLOAD Dec 16 02:09:44.438000 audit[4119]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.438000 audit: BPF prog-id=175 op=LOAD Dec 16 02:09:44.438000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4018 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:44.438000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366330613532343466363734316161356666373638663762393661 Dec 16 02:09:44.536468 containerd[1908]: time="2025-12-16T02:09:44.534138879Z" level=info msg="StartContainer for \"6f6c0a5244f6741aa5ff768f7b96ae30dbfc1862e081817aaea0703ae272738e\" returns successfully" Dec 16 02:09:44.994218 kubelet[3500]: E1216 02:09:44.993932 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:45.289557 kubelet[3500]: I1216 02:09:45.289338 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f44f9b84d-kd557" podStartSLOduration=3.36616824 podStartE2EDuration="6.289318227s" podCreationTimestamp="2025-12-16 02:09:39 +0000 UTC" firstStartedPulling="2025-12-16 02:09:41.320680835 +0000 UTC m=+36.668756751" lastFinishedPulling="2025-12-16 02:09:44.24383081 +0000 UTC m=+39.591906738" observedRunningTime="2025-12-16 02:09:45.289035771 +0000 UTC m=+40.637111687" watchObservedRunningTime="2025-12-16 02:09:45.289318227 +0000 UTC m=+40.637394131" Dec 16 02:09:45.388000 audit[4157]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:45.388000 audit[4157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdb4954b0 a2=0 a3=1 items=0 ppid=3608 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:45.393000 audit[4157]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:09:45.393000 audit[4157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdb4954b0 a2=0 a3=1 items=0 ppid=3608 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:45.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:09:46.991714 kubelet[3500]: E1216 02:09:46.991265 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:47.488136 containerd[1908]: time="2025-12-16T02:09:47.488028030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:47.490464 containerd[1908]: time="2025-12-16T02:09:47.490241094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active 
requests=0, bytes read=65921248" Dec 16 02:09:47.493466 containerd[1908]: time="2025-12-16T02:09:47.492252462Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:47.496938 containerd[1908]: time="2025-12-16T02:09:47.496847994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:47.501571 containerd[1908]: time="2025-12-16T02:09:47.501486486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.253969144s" Dec 16 02:09:47.501571 containerd[1908]: time="2025-12-16T02:09:47.501559746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 02:09:47.511958 containerd[1908]: time="2025-12-16T02:09:47.511901622Z" level=info msg="CreateContainer within sandbox \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 02:09:47.532486 containerd[1908]: time="2025-12-16T02:09:47.530645262Z" level=info msg="Container 42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:47.550224 containerd[1908]: time="2025-12-16T02:09:47.550122450Z" level=info msg="CreateContainer within sandbox \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917\"" Dec 16 02:09:47.555612 containerd[1908]: time="2025-12-16T02:09:47.555544170Z" level=info msg="StartContainer for \"42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917\"" Dec 16 02:09:47.562472 containerd[1908]: time="2025-12-16T02:09:47.562293258Z" level=info msg="connecting to shim 42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917" address="unix:///run/containerd/s/045030b7c328c54bca72b955db729ddaa1467fa7ba3ab052475179968efa8b83" protocol=ttrpc version=3 Dec 16 02:09:47.607188 systemd[1]: Started cri-containerd-42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917.scope - libcontainer container 42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917. 
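Annotation: the pod_startup_latency_tracker entry a few lines up encodes a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). Redoing the arithmetic with the logged values in a Go sketch (the kubelet itself subtracts monotonic clock readings, the m=+... values, so the last few digits differ slightly from the printed 3.36616824):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Values copied from the calico-typha-7f44f9b84d-kd557 entry above.
        created := parse("2025-12-16 02:09:39 +0000 UTC")
        firstPull := parse("2025-12-16 02:09:41.320680835 +0000 UTC")
        lastPull := parse("2025-12-16 02:09:44.24383081 +0000 UTC")
        running := parse("2025-12-16 02:09:45.289318227 +0000 UTC")

        e2e := running.Sub(created)          // ≈ 6.289318227s (podStartE2EDuration)
        slo := e2e - lastPull.Sub(firstPull) // ≈ 3.366168252s (podStartSLOduration)
        fmt.Println(e2e, slo)
    }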
Dec 16 02:09:47.696000 audit: BPF prog-id=176 op=LOAD Dec 16 02:09:47.696000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3963 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:47.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633265336138643235336132616332643530316638313763336233 Dec 16 02:09:47.696000 audit: BPF prog-id=177 op=LOAD Dec 16 02:09:47.696000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3963 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:47.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633265336138643235336132616332643530316638313763336233 Dec 16 02:09:47.697000 audit: BPF prog-id=177 op=UNLOAD Dec 16 02:09:47.697000 audit[4166]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:47.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633265336138643235336132616332643530316638313763336233 Dec 16 02:09:47.698000 audit: BPF prog-id=176 op=UNLOAD Dec 16 02:09:47.698000 audit[4166]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:47.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633265336138643235336132616332643530316638313763336233 Dec 16 02:09:47.698000 audit: BPF prog-id=178 op=LOAD Dec 16 02:09:47.698000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3963 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:47.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432633265336138643235336132616332643530316638313763336233 Dec 16 02:09:47.755179 containerd[1908]: time="2025-12-16T02:09:47.754857583Z" level=info msg="StartContainer for 
\"42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917\" returns successfully" Dec 16 02:09:48.898152 systemd[1]: cri-containerd-42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917.scope: Deactivated successfully. Dec 16 02:09:48.900095 systemd[1]: cri-containerd-42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917.scope: Consumed 1.050s CPU time, 189.3M memory peak, 165.9M written to disk. Dec 16 02:09:48.902032 containerd[1908]: time="2025-12-16T02:09:48.900842493Z" level=info msg="received container exit event container_id:\"42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917\" id:\"42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917\" pid:4179 exited_at:{seconds:1765850988 nanos:900090525}" Dec 16 02:09:48.903000 audit: BPF prog-id=178 op=UNLOAD Dec 16 02:09:48.948190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42c2e3a8d253a2ac2d501f817c3b3af90bceb32b644c4aaaed7b9eacd84cb917-rootfs.mount: Deactivated successfully. Dec 16 02:09:48.990941 kubelet[3500]: E1216 02:09:48.990719 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:48.996038 kubelet[3500]: I1216 02:09:48.995990 3500 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 02:09:49.128798 systemd[1]: Created slice kubepods-burstable-pod7c2b8467_d5c0_4053_9ef4_fa4d698ae6d3.slice - libcontainer container kubepods-burstable-pod7c2b8467_d5c0_4053_9ef4_fa4d698ae6d3.slice. Dec 16 02:09:49.179151 systemd[1]: Created slice kubepods-burstable-podc6cccb09_9581_436e_8372_f4efd2272de1.slice - libcontainer container kubepods-burstable-podc6cccb09_9581_436e_8372_f4efd2272de1.slice. Dec 16 02:09:49.222523 systemd[1]: Created slice kubepods-besteffort-pod1d7a12f8_f60f_4170_be36_168aef541297.slice - libcontainer container kubepods-besteffort-pod1d7a12f8_f60f_4170_be36_168aef541297.slice. 
Dec 16 02:09:49.252681 kubelet[3500]: I1216 02:09:49.252622 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3-config-volume\") pod \"coredns-66bc5c9577-sw9cf\" (UID: \"7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3\") " pod="kube-system/coredns-66bc5c9577-sw9cf" Dec 16 02:09:49.266654 kubelet[3500]: I1216 02:09:49.252710 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6cccb09-9581-436e-8372-f4efd2272de1-config-volume\") pod \"coredns-66bc5c9577-8srj8\" (UID: \"c6cccb09-9581-436e-8372-f4efd2272de1\") " pod="kube-system/coredns-66bc5c9577-8srj8" Dec 16 02:09:49.266654 kubelet[3500]: I1216 02:09:49.252762 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxl6s\" (UniqueName: \"kubernetes.io/projected/7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3-kube-api-access-kxl6s\") pod \"coredns-66bc5c9577-sw9cf\" (UID: \"7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3\") " pod="kube-system/coredns-66bc5c9577-sw9cf" Dec 16 02:09:49.266654 kubelet[3500]: I1216 02:09:49.252805 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bbl\" (UniqueName: \"kubernetes.io/projected/c6cccb09-9581-436e-8372-f4efd2272de1-kube-api-access-k7bbl\") pod \"coredns-66bc5c9577-8srj8\" (UID: \"c6cccb09-9581-436e-8372-f4efd2272de1\") " pod="kube-system/coredns-66bc5c9577-8srj8" Dec 16 02:09:49.292706 systemd[1]: Created slice kubepods-besteffort-pod6363be22_676f_4db3_afb1_0a1ce8d8def2.slice - libcontainer container kubepods-besteffort-pod6363be22_676f_4db3_afb1_0a1ce8d8def2.slice. Dec 16 02:09:49.348374 systemd[1]: Created slice kubepods-besteffort-pod2d19a364_8480_43c0_bbf1_372d74633ca8.slice - libcontainer container kubepods-besteffort-pod2d19a364_8480_43c0_bbf1_372d74633ca8.slice. 
Dec 16 02:09:49.353895 kubelet[3500]: I1216 02:09:49.353816 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6363be22-676f-4db3-afb1-0a1ce8d8def2-calico-apiserver-certs\") pod \"calico-apiserver-8495b986f5-t8ws5\" (UID: \"6363be22-676f-4db3-afb1-0a1ce8d8def2\") " pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" Dec 16 02:09:49.353895 kubelet[3500]: I1216 02:09:49.353897 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps455\" (UniqueName: \"kubernetes.io/projected/6363be22-676f-4db3-afb1-0a1ce8d8def2-kube-api-access-ps455\") pod \"calico-apiserver-8495b986f5-t8ws5\" (UID: \"6363be22-676f-4db3-afb1-0a1ce8d8def2\") " pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" Dec 16 02:09:49.353895 kubelet[3500]: I1216 02:09:49.353950 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drkf9\" (UniqueName: \"kubernetes.io/projected/1d7a12f8-f60f-4170-be36-168aef541297-kube-api-access-drkf9\") pod \"calico-kube-controllers-54647b869b-dj58v\" (UID: \"1d7a12f8-f60f-4170-be36-168aef541297\") " pod="calico-system/calico-kube-controllers-54647b869b-dj58v" Dec 16 02:09:49.354304 kubelet[3500]: I1216 02:09:49.354032 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d7a12f8-f60f-4170-be36-168aef541297-tigera-ca-bundle\") pod \"calico-kube-controllers-54647b869b-dj58v\" (UID: \"1d7a12f8-f60f-4170-be36-168aef541297\") " pod="calico-system/calico-kube-controllers-54647b869b-dj58v" Dec 16 02:09:49.447292 systemd[1]: Created slice kubepods-besteffort-pod574f6c46_74d5_42f4_9d86_d6cdf3677ba5.slice - libcontainer container kubepods-besteffort-pod574f6c46_74d5_42f4_9d86_d6cdf3677ba5.slice. 
Dec 16 02:09:49.455481 kubelet[3500]: I1216 02:09:49.454863 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-backend-key-pair\") pod \"whisker-66dd69554-nqj9s\" (UID: \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\") " pod="calico-system/whisker-66dd69554-nqj9s" Dec 16 02:09:49.455481 kubelet[3500]: I1216 02:09:49.455043 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6625\" (UniqueName: \"kubernetes.io/projected/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-kube-api-access-l6625\") pod \"whisker-66dd69554-nqj9s\" (UID: \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\") " pod="calico-system/whisker-66dd69554-nqj9s" Dec 16 02:09:49.455481 kubelet[3500]: I1216 02:09:49.455196 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgw9\" (UniqueName: \"kubernetes.io/projected/2d19a364-8480-43c0-bbf1-372d74633ca8-kube-api-access-hpgw9\") pod \"calico-apiserver-8495b986f5-pp87t\" (UID: \"2d19a364-8480-43c0-bbf1-372d74633ca8\") " pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" Dec 16 02:09:49.455481 kubelet[3500]: I1216 02:09:49.455238 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-ca-bundle\") pod \"whisker-66dd69554-nqj9s\" (UID: \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\") " pod="calico-system/whisker-66dd69554-nqj9s" Dec 16 02:09:49.455481 kubelet[3500]: I1216 02:09:49.455381 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d19a364-8480-43c0-bbf1-372d74633ca8-calico-apiserver-certs\") pod \"calico-apiserver-8495b986f5-pp87t\" (UID: \"2d19a364-8480-43c0-bbf1-372d74633ca8\") " pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" Dec 16 02:09:49.528449 systemd[1]: Created slice kubepods-besteffort-pod5f75e4b0_aa22_4937_a793_7da0a16c1ff9.slice - libcontainer container kubepods-besteffort-pod5f75e4b0_aa22_4937_a793_7da0a16c1ff9.slice. 
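
The kubepods-*.slice names created above appear to embed the pod's QoS class and its UID with dashes replaced by underscores; the goldmane pod's UID in the volume entries that follow matches the besteffort slice created here. A small illustrative helper (assumption: this naming rule, which holds for every slice in this log):

    # Minimal sketch of the apparent slice-naming rule; helper name is illustrative.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # matches the slice systemd created for the goldmane pod in this log
    assert pod_slice_name("besteffort", "5f75e4b0-aa22-4937-a793-7da0a16c1ff9") == \
        "kubepods-besteffort-pod5f75e4b0_aa22_4937_a793_7da0a16c1ff9.slice"
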
Dec 16 02:09:49.557202 kubelet[3500]: I1216 02:09:49.557045 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snx8\" (UniqueName: \"kubernetes.io/projected/5f75e4b0-aa22-4937-a793-7da0a16c1ff9-kube-api-access-8snx8\") pod \"goldmane-7c778bb748-5q889\" (UID: \"5f75e4b0-aa22-4937-a793-7da0a16c1ff9\") " pod="calico-system/goldmane-7c778bb748-5q889" Dec 16 02:09:49.558762 kubelet[3500]: I1216 02:09:49.558397 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f75e4b0-aa22-4937-a793-7da0a16c1ff9-config\") pod \"goldmane-7c778bb748-5q889\" (UID: \"5f75e4b0-aa22-4937-a793-7da0a16c1ff9\") " pod="calico-system/goldmane-7c778bb748-5q889" Dec 16 02:09:49.563309 kubelet[3500]: I1216 02:09:49.561664 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f75e4b0-aa22-4937-a793-7da0a16c1ff9-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-5q889\" (UID: \"5f75e4b0-aa22-4937-a793-7da0a16c1ff9\") " pod="calico-system/goldmane-7c778bb748-5q889" Dec 16 02:09:49.563309 kubelet[3500]: I1216 02:09:49.561741 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5f75e4b0-aa22-4937-a793-7da0a16c1ff9-goldmane-key-pair\") pod \"goldmane-7c778bb748-5q889\" (UID: \"5f75e4b0-aa22-4937-a793-7da0a16c1ff9\") " pod="calico-system/goldmane-7c778bb748-5q889" Dec 16 02:09:49.578978 containerd[1908]: time="2025-12-16T02:09:49.578898488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw9cf,Uid:7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:49.648101 containerd[1908]: time="2025-12-16T02:09:49.648009957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8srj8,Uid:c6cccb09-9581-436e-8372-f4efd2272de1,Namespace:kube-system,Attempt:0,}" Dec 16 02:09:49.725549 containerd[1908]: time="2025-12-16T02:09:49.725279469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-t8ws5,Uid:6363be22-676f-4db3-afb1-0a1ce8d8def2,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:09:49.798383 containerd[1908]: time="2025-12-16T02:09:49.798294693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54647b869b-dj58v,Uid:1d7a12f8-f60f-4170-be36-168aef541297,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:49.895259 containerd[1908]: time="2025-12-16T02:09:49.895115122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-pp87t,Uid:2d19a364-8480-43c0-bbf1-372d74633ca8,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:09:49.930575 containerd[1908]: time="2025-12-16T02:09:49.930327550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5q889,Uid:5f75e4b0-aa22-4937-a793-7da0a16c1ff9,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:49.935667 containerd[1908]: time="2025-12-16T02:09:49.935583154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dd69554-nqj9s,Uid:574f6c46-74d5-42f4-9d86-d6cdf3677ba5,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:50.368449 containerd[1908]: time="2025-12-16T02:09:50.363873392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 02:09:50.490932 containerd[1908]: 
time="2025-12-16T02:09:50.490851249Z" level=error msg="Failed to destroy network for sandbox \"5cd6d44f68f87eec74d5e0577386002e38296e2dc5ec018309f2edaafcaa7d8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.502881 containerd[1908]: time="2025-12-16T02:09:50.502759521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dd69554-nqj9s,Uid:574f6c46-74d5-42f4-9d86-d6cdf3677ba5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd6d44f68f87eec74d5e0577386002e38296e2dc5ec018309f2edaafcaa7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.503813 kubelet[3500]: E1216 02:09:50.503141 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd6d44f68f87eec74d5e0577386002e38296e2dc5ec018309f2edaafcaa7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.503813 kubelet[3500]: E1216 02:09:50.503242 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd6d44f68f87eec74d5e0577386002e38296e2dc5ec018309f2edaafcaa7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66dd69554-nqj9s" Dec 16 02:09:50.503813 kubelet[3500]: E1216 02:09:50.503282 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd6d44f68f87eec74d5e0577386002e38296e2dc5ec018309f2edaafcaa7d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66dd69554-nqj9s" Dec 16 02:09:50.504529 kubelet[3500]: E1216 02:09:50.503403 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66dd69554-nqj9s_calico-system(574f6c46-74d5-42f4-9d86-d6cdf3677ba5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66dd69554-nqj9s_calico-system(574f6c46-74d5-42f4-9d86-d6cdf3677ba5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cd6d44f68f87eec74d5e0577386002e38296e2dc5ec018309f2edaafcaa7d8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66dd69554-nqj9s" podUID="574f6c46-74d5-42f4-9d86-d6cdf3677ba5" Dec 16 02:09:50.515122 containerd[1908]: time="2025-12-16T02:09:50.514891269Z" level=error msg="Failed to destroy network for sandbox \"e21ade9d7c570d3ef58ef98c59f262f5427a54cf55308891084df1b13b0e6f81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
02:09:50.527572 containerd[1908]: time="2025-12-16T02:09:50.527497413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54647b869b-dj58v,Uid:1d7a12f8-f60f-4170-be36-168aef541297,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21ade9d7c570d3ef58ef98c59f262f5427a54cf55308891084df1b13b0e6f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.528655 containerd[1908]: time="2025-12-16T02:09:50.528499857Z" level=error msg="Failed to destroy network for sandbox \"bb3f43bced9aaf74e9b82228a772e232296cdecbc4d4ae46580c1d40612454d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.528809 kubelet[3500]: E1216 02:09:50.528346 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21ade9d7c570d3ef58ef98c59f262f5427a54cf55308891084df1b13b0e6f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.529256 kubelet[3500]: E1216 02:09:50.528922 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21ade9d7c570d3ef58ef98c59f262f5427a54cf55308891084df1b13b0e6f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" Dec 16 02:09:50.529256 kubelet[3500]: E1216 02:09:50.528978 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e21ade9d7c570d3ef58ef98c59f262f5427a54cf55308891084df1b13b0e6f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" Dec 16 02:09:50.530532 kubelet[3500]: E1216 02:09:50.529384 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54647b869b-dj58v_calico-system(1d7a12f8-f60f-4170-be36-168aef541297)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54647b869b-dj58v_calico-system(1d7a12f8-f60f-4170-be36-168aef541297)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e21ade9d7c570d3ef58ef98c59f262f5427a54cf55308891084df1b13b0e6f81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:09:50.540540 containerd[1908]: time="2025-12-16T02:09:50.540460737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8srj8,Uid:c6cccb09-9581-436e-8372-f4efd2272de1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"bb3f43bced9aaf74e9b82228a772e232296cdecbc4d4ae46580c1d40612454d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.542004 kubelet[3500]: E1216 02:09:50.541542 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3f43bced9aaf74e9b82228a772e232296cdecbc4d4ae46580c1d40612454d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.542004 kubelet[3500]: E1216 02:09:50.541737 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3f43bced9aaf74e9b82228a772e232296cdecbc4d4ae46580c1d40612454d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8srj8" Dec 16 02:09:50.542004 kubelet[3500]: E1216 02:09:50.541776 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3f43bced9aaf74e9b82228a772e232296cdecbc4d4ae46580c1d40612454d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8srj8" Dec 16 02:09:50.544120 kubelet[3500]: E1216 02:09:50.541953 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8srj8_kube-system(c6cccb09-9581-436e-8372-f4efd2272de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8srj8_kube-system(c6cccb09-9581-436e-8372-f4efd2272de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb3f43bced9aaf74e9b82228a772e232296cdecbc4d4ae46580c1d40612454d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8srj8" podUID="c6cccb09-9581-436e-8372-f4efd2272de1" Dec 16 02:09:50.546470 containerd[1908]: time="2025-12-16T02:09:50.546362265Z" level=error msg="Failed to destroy network for sandbox \"11f11d7f01f673d7f8fd32403f95214bb90adbe377c26b9f827c19843f2b4e6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.557043 containerd[1908]: time="2025-12-16T02:09:50.556966545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-pp87t,Uid:2d19a364-8480-43c0-bbf1-372d74633ca8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f11d7f01f673d7f8fd32403f95214bb90adbe377c26b9f827c19843f2b4e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.559730 kubelet[3500]: 
E1216 02:09:50.559614 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f11d7f01f673d7f8fd32403f95214bb90adbe377c26b9f827c19843f2b4e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.559730 kubelet[3500]: E1216 02:09:50.559713 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f11d7f01f673d7f8fd32403f95214bb90adbe377c26b9f827c19843f2b4e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" Dec 16 02:09:50.560060 kubelet[3500]: E1216 02:09:50.559752 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f11d7f01f673d7f8fd32403f95214bb90adbe377c26b9f827c19843f2b4e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" Dec 16 02:09:50.560060 kubelet[3500]: E1216 02:09:50.559866 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8495b986f5-pp87t_calico-apiserver(2d19a364-8480-43c0-bbf1-372d74633ca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8495b986f5-pp87t_calico-apiserver(2d19a364-8480-43c0-bbf1-372d74633ca8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11f11d7f01f673d7f8fd32403f95214bb90adbe377c26b9f827c19843f2b4e6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:09:50.569368 containerd[1908]: time="2025-12-16T02:09:50.567778917Z" level=error msg="Failed to destroy network for sandbox \"f66a66938eae4b071a616206164f014fb9f113b05a64f402d3e256b91e75decc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.569368 containerd[1908]: time="2025-12-16T02:09:50.569084721Z" level=error msg="Failed to destroy network for sandbox \"5867be7b055d9566d03d3e1bca57c0c4ae4b44a8d9cd9befeca3cd6af967ca57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.577468 containerd[1908]: time="2025-12-16T02:09:50.577352925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5q889,Uid:5f75e4b0-aa22-4937-a793-7da0a16c1ff9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867be7b055d9566d03d3e1bca57c0c4ae4b44a8d9cd9befeca3cd6af967ca57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.580026 kubelet[3500]: E1216 02:09:50.577781 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867be7b055d9566d03d3e1bca57c0c4ae4b44a8d9cd9befeca3cd6af967ca57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.580026 kubelet[3500]: E1216 02:09:50.577867 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867be7b055d9566d03d3e1bca57c0c4ae4b44a8d9cd9befeca3cd6af967ca57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5q889" Dec 16 02:09:50.580026 kubelet[3500]: E1216 02:09:50.577904 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5867be7b055d9566d03d3e1bca57c0c4ae4b44a8d9cd9befeca3cd6af967ca57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-5q889" Dec 16 02:09:50.580955 kubelet[3500]: E1216 02:09:50.577997 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-5q889_calico-system(5f75e4b0-aa22-4937-a793-7da0a16c1ff9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-5q889_calico-system(5f75e4b0-aa22-4937-a793-7da0a16c1ff9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5867be7b055d9566d03d3e1bca57c0c4ae4b44a8d9cd9befeca3cd6af967ca57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:09:50.584000 containerd[1908]: time="2025-12-16T02:09:50.583933761Z" level=error msg="Failed to destroy network for sandbox \"2699dbdb0e64729e8ba4102b886b7a0b084194c4aa709eb3a29559e5d2986a0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.584679 containerd[1908]: time="2025-12-16T02:09:50.584265501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw9cf,Uid:7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66a66938eae4b071a616206164f014fb9f113b05a64f402d3e256b91e75decc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.586556 kubelet[3500]: E1216 02:09:50.586087 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66a66938eae4b071a616206164f014fb9f113b05a64f402d3e256b91e75decc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.586556 kubelet[3500]: E1216 02:09:50.586179 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66a66938eae4b071a616206164f014fb9f113b05a64f402d3e256b91e75decc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sw9cf" Dec 16 02:09:50.586556 kubelet[3500]: E1216 02:09:50.586213 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f66a66938eae4b071a616206164f014fb9f113b05a64f402d3e256b91e75decc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sw9cf" Dec 16 02:09:50.586823 kubelet[3500]: E1216 02:09:50.586313 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sw9cf_kube-system(7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sw9cf_kube-system(7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f66a66938eae4b071a616206164f014fb9f113b05a64f402d3e256b91e75decc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sw9cf" podUID="7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3" Dec 16 02:09:50.591269 containerd[1908]: time="2025-12-16T02:09:50.591154521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-t8ws5,Uid:6363be22-676f-4db3-afb1-0a1ce8d8def2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699dbdb0e64729e8ba4102b886b7a0b084194c4aa709eb3a29559e5d2986a0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.591857 kubelet[3500]: E1216 02:09:50.591795 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699dbdb0e64729e8ba4102b886b7a0b084194c4aa709eb3a29559e5d2986a0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:50.591981 kubelet[3500]: E1216 02:09:50.591883 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699dbdb0e64729e8ba4102b886b7a0b084194c4aa709eb3a29559e5d2986a0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" Dec 16 02:09:50.591981 kubelet[3500]: E1216 02:09:50.591920 3500 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2699dbdb0e64729e8ba4102b886b7a0b084194c4aa709eb3a29559e5d2986a0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" Dec 16 02:09:50.592878 kubelet[3500]: E1216 02:09:50.592596 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8495b986f5-t8ws5_calico-apiserver(6363be22-676f-4db3-afb1-0a1ce8d8def2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8495b986f5-t8ws5_calico-apiserver(6363be22-676f-4db3-afb1-0a1ce8d8def2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2699dbdb0e64729e8ba4102b886b7a0b084194c4aa709eb3a29559e5d2986a0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:09:50.946785 systemd[1]: run-netns-cni\x2dce76898e\x2d4a0b\x2d0fca\x2d079e\x2d4203df589e91.mount: Deactivated successfully. Dec 16 02:09:50.947020 systemd[1]: run-netns-cni\x2dd3c5b8af\x2d9e1c\x2d262d\x2de3df\x2d8f014578f097.mount: Deactivated successfully. Dec 16 02:09:50.947158 systemd[1]: run-netns-cni\x2dcdf7f240\x2d8fff\x2d5742\x2d9daf\x2d006e8f2d13dd.mount: Deactivated successfully. Dec 16 02:09:50.947288 systemd[1]: run-netns-cni\x2dd7044149\x2d2aec\x2d475c\x2dfed1\x2deb588891cf3f.mount: Deactivated successfully. Dec 16 02:09:50.947440 systemd[1]: run-netns-cni\x2d7213961c\x2d7fe6\x2daa5a\x2d066f\x2d5161e243e89d.mount: Deactivated successfully. Dec 16 02:09:50.947605 systemd[1]: run-netns-cni\x2d34dd0895\x2d3177\x2d2bc3\x2d2bc3\x2dc548b0143fcc.mount: Deactivated successfully. Dec 16 02:09:51.006881 systemd[1]: Created slice kubepods-besteffort-podaaad2db4_9021_4d31_8275_e9b7ba731389.slice - libcontainer container kubepods-besteffort-podaaad2db4_9021_4d31_8275_e9b7ba731389.slice. Dec 16 02:09:51.019326 containerd[1908]: time="2025-12-16T02:09:51.019235515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f5sg,Uid:aaad2db4-9021-4d31-8275-e9b7ba731389,Namespace:calico-system,Attempt:0,}" Dec 16 02:09:51.139551 containerd[1908]: time="2025-12-16T02:09:51.139480304Z" level=error msg="Failed to destroy network for sandbox \"307415fd2ea076d8084689ceef0af69cf625c00e9b352ef25b8641dd8a582ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:51.145154 systemd[1]: run-netns-cni\x2d793dafce\x2d818e\x2d7754\x2df2ba\x2d3507753086d7.mount: Deactivated successfully. 
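
The run-netns mount units deactivated above use systemd's path escaping: "/" becomes "-" in the unit name and literal dashes become \x2d. A minimal sketch (illustrative helper, not a systemd API; a complete unescaper would handle other \xNN escapes and edge cases too) that recovers the netns path from one of the logged unit names:

    import re

    # Minimal sketch: turn a systemd mount unit name back into its mount path.
    # Requires Python 3.9+ for str.removesuffix.
    def systemd_unescape_mount(unit: str) -> str:
        name = unit.removesuffix(".mount")
        path = "/" + name.replace("-", "/")      # unit separators back to "/"
        return re.sub(r"\\x([0-9a-fA-F]{2})",    # then decode \xNN escapes
                      lambda m: chr(int(m.group(1), 16)), path)

    print(systemd_unescape_mount(
        r"run-netns-cni\x2dce76898e\x2d4a0b\x2d0fca\x2d079e\x2d4203df589e91.mount"))
    # -> /run/netns/cni-ce76898e-4a0b-0fca-079e-4203df589e91
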
Dec 16 02:09:51.145501 containerd[1908]: time="2025-12-16T02:09:51.145127132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f5sg,Uid:aaad2db4-9021-4d31-8275-e9b7ba731389,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"307415fd2ea076d8084689ceef0af69cf625c00e9b352ef25b8641dd8a582ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:51.145870 kubelet[3500]: E1216 02:09:51.145522 3500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307415fd2ea076d8084689ceef0af69cf625c00e9b352ef25b8641dd8a582ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:09:51.145870 kubelet[3500]: E1216 02:09:51.145622 3500 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307415fd2ea076d8084689ceef0af69cf625c00e9b352ef25b8641dd8a582ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:51.145870 kubelet[3500]: E1216 02:09:51.145665 3500 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307415fd2ea076d8084689ceef0af69cf625c00e9b352ef25b8641dd8a582ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7f5sg" Dec 16 02:09:51.146603 kubelet[3500]: E1216 02:09:51.145755 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"307415fd2ea076d8084689ceef0af69cf625c00e9b352ef25b8641dd8a582ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:09:57.781401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329023542.mount: Deactivated successfully. 
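
Every sandbox failure above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, which calico/node writes once it is running. A minimal host-side sketch of that condition (illustrative only, not Calico code) is:

    import sys

    # Minimal sketch of the check behind the repeated CNI errors above:
    # the nodename file must exist and be non-empty before pod networking
    # can be set up.
    NODENAME_FILE = "/var/lib/calico/nodename"

    def calico_node_ready(path: str = NODENAME_FILE) -> bool:
        try:
            with open(path) as f:
                nodename = f.read().strip()
        except FileNotFoundError:
            return False      # matches "stat ...: no such file or directory"
        return bool(nodename)

    if __name__ == "__main__":
        sys.exit(0 if calico_node_ready() else 1)

Once the calico-node container starts later in this log, the file becomes available and subsequent sandbox setups succeed.
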
Dec 16 02:09:57.831710 containerd[1908]: time="2025-12-16T02:09:57.831622217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:57.836457 containerd[1908]: time="2025-12-16T02:09:57.835628897Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:57.836678 containerd[1908]: time="2025-12-16T02:09:57.836358869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 02:09:57.842492 containerd[1908]: time="2025-12-16T02:09:57.841549697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:09:57.843848 containerd[1908]: time="2025-12-16T02:09:57.843454613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.479459925s" Dec 16 02:09:57.843848 containerd[1908]: time="2025-12-16T02:09:57.843533033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 02:09:57.915746 containerd[1908]: time="2025-12-16T02:09:57.915655422Z" level=info msg="CreateContainer within sandbox \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 02:09:57.930102 containerd[1908]: time="2025-12-16T02:09:57.930023442Z" level=info msg="Container 2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:09:57.941579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714162797.mount: Deactivated successfully. Dec 16 02:09:57.957157 containerd[1908]: time="2025-12-16T02:09:57.957050322Z" level=info msg="CreateContainer within sandbox \"a5d0a390e97e2c3326054ce43ba0461da271523947c8c1cab4f38b939055352a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389\"" Dec 16 02:09:57.958559 containerd[1908]: time="2025-12-16T02:09:57.958456950Z" level=info msg="StartContainer for \"2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389\"" Dec 16 02:09:57.962028 containerd[1908]: time="2025-12-16T02:09:57.961973322Z" level=info msg="connecting to shim 2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389" address="unix:///run/containerd/s/045030b7c328c54bca72b955db729ddaa1467fa7ba3ab052475179968efa8b83" protocol=ttrpc version=3 Dec 16 02:09:58.009791 systemd[1]: Started cri-containerd-2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389.scope - libcontainer container 2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389. 
Dec 16 02:09:58.189000 audit: BPF prog-id=179 op=LOAD Dec 16 02:09:58.191512 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 02:09:58.191901 kernel: audit: type=1334 audit(1765850998.189:583): prog-id=179 op=LOAD Dec 16 02:09:58.189000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.202742 kernel: audit: type=1300 audit(1765850998.189:583): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.212119 kernel: audit: type=1327 audit(1765850998.189:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.192000 audit: BPF prog-id=180 op=LOAD Dec 16 02:09:58.215467 kernel: audit: type=1334 audit(1765850998.192:584): prog-id=180 op=LOAD Dec 16 02:09:58.215833 kernel: audit: type=1300 audit(1765850998.192:584): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.192000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.231749 kernel: audit: type=1327 audit(1765850998.192:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.192000 audit: BPF prog-id=180 op=UNLOAD Dec 16 02:09:58.234323 kernel: audit: type=1334 audit(1765850998.192:585): prog-id=180 op=UNLOAD Dec 16 02:09:58.234498 kernel: audit: type=1300 audit(1765850998.192:585): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.192000 
audit[4441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.248805 kernel: audit: type=1327 audit(1765850998.192:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.192000 audit: BPF prog-id=179 op=UNLOAD Dec 16 02:09:58.254972 kernel: audit: type=1334 audit(1765850998.192:586): prog-id=179 op=UNLOAD Dec 16 02:09:58.192000 audit[4441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.192000 audit: BPF prog-id=181 op=LOAD Dec 16 02:09:58.192000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3963 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:09:58.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261383632303539626561313936633465626431396133653366633335 Dec 16 02:09:58.298932 containerd[1908]: time="2025-12-16T02:09:58.298841344Z" level=info msg="StartContainer for \"2a862059bea196c4ebd19a3e3fc35339d1055282cba6723405f4f9178cb35389\" returns successfully" Dec 16 02:09:58.475596 kubelet[3500]: I1216 02:09:58.468045 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-895lg" podStartSLOduration=2.482470704 podStartE2EDuration="19.46801684s" podCreationTimestamp="2025-12-16 02:09:39 +0000 UTC" firstStartedPulling="2025-12-16 02:09:40.874459717 +0000 UTC m=+36.222535633" lastFinishedPulling="2025-12-16 02:09:57.860005865 +0000 UTC m=+53.208081769" observedRunningTime="2025-12-16 02:09:58.463541128 +0000 UTC m=+53.811617056" watchObservedRunningTime="2025-12-16 02:09:58.46801684 +0000 UTC m=+53.816092768" Dec 16 02:09:58.719563 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 02:09:58.719748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
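
The pod_startup_latency_tracker entry above for calico-node-895lg is consistent with the SLO duration being the end-to-end startup time minus the image-pulling window. A short sketch reproducing the logged figures, under that assumption (variable names are illustrative):

    # Minimal sketch: reproduce the calico-node-895lg startup-latency figures
    # above, assuming podStartSLOduration = E2E duration minus the pull window.
    first_started_pulling = 36.222535633   # monotonic m=+ offsets from the entry
    last_finished_pulling = 53.208081769
    pod_start_e2e         = 19.46801684    # podStartE2EDuration in seconds

    pull_window = last_finished_pulling - first_started_pulling
    slo_duration = pod_start_e2e - pull_window
    print(f"{slo_duration:.9f}")           # -> 2.482470704, as logged
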
Dec 16 02:09:59.060536 kubelet[3500]: I1216 02:09:59.059469 3500 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-backend-key-pair\") pod \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\" (UID: \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\") " Dec 16 02:09:59.060536 kubelet[3500]: I1216 02:09:59.060307 3500 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-ca-bundle\") pod \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\" (UID: \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\") " Dec 16 02:09:59.060536 kubelet[3500]: I1216 02:09:59.060370 3500 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6625\" (UniqueName: \"kubernetes.io/projected/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-kube-api-access-l6625\") pod \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\" (UID: \"574f6c46-74d5-42f4-9d86-d6cdf3677ba5\") " Dec 16 02:09:59.071451 kubelet[3500]: I1216 02:09:59.065348 3500 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "574f6c46-74d5-42f4-9d86-d6cdf3677ba5" (UID: "574f6c46-74d5-42f4-9d86-d6cdf3677ba5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 02:09:59.080584 systemd[1]: var-lib-kubelet-pods-574f6c46\x2d74d5\x2d42f4\x2d9d86\x2dd6cdf3677ba5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl6625.mount: Deactivated successfully. Dec 16 02:09:59.084010 kubelet[3500]: I1216 02:09:59.083581 3500 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-kube-api-access-l6625" (OuterVolumeSpecName: "kube-api-access-l6625") pod "574f6c46-74d5-42f4-9d86-d6cdf3677ba5" (UID: "574f6c46-74d5-42f4-9d86-d6cdf3677ba5"). InnerVolumeSpecName "kube-api-access-l6625". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 02:09:59.094755 kubelet[3500]: I1216 02:09:59.094678 3500 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "574f6c46-74d5-42f4-9d86-d6cdf3677ba5" (UID: "574f6c46-74d5-42f4-9d86-d6cdf3677ba5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 02:09:59.096035 systemd[1]: var-lib-kubelet-pods-574f6c46\x2d74d5\x2d42f4\x2d9d86\x2dd6cdf3677ba5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 02:09:59.161156 kubelet[3500]: I1216 02:09:59.161075 3500 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-backend-key-pair\") on node \"ip-172-31-24-92\" DevicePath \"\"" Dec 16 02:09:59.161156 kubelet[3500]: I1216 02:09:59.161139 3500 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-whisker-ca-bundle\") on node \"ip-172-31-24-92\" DevicePath \"\"" Dec 16 02:09:59.161156 kubelet[3500]: I1216 02:09:59.161165 3500 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l6625\" (UniqueName: \"kubernetes.io/projected/574f6c46-74d5-42f4-9d86-d6cdf3677ba5-kube-api-access-l6625\") on node \"ip-172-31-24-92\" DevicePath \"\"" Dec 16 02:09:59.441317 systemd[1]: Removed slice kubepods-besteffort-pod574f6c46_74d5_42f4_9d86_d6cdf3677ba5.slice - libcontainer container kubepods-besteffort-pod574f6c46_74d5_42f4_9d86_d6cdf3677ba5.slice. Dec 16 02:09:59.632531 systemd[1]: Created slice kubepods-besteffort-podb08348aa_b9db_4017_ab2d_63cae97b2a73.slice - libcontainer container kubepods-besteffort-podb08348aa_b9db_4017_ab2d_63cae97b2a73.slice. Dec 16 02:09:59.766852 kubelet[3500]: I1216 02:09:59.766752 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b08348aa-b9db-4017-ab2d-63cae97b2a73-whisker-backend-key-pair\") pod \"whisker-77f9546868-lgh2z\" (UID: \"b08348aa-b9db-4017-ab2d-63cae97b2a73\") " pod="calico-system/whisker-77f9546868-lgh2z" Dec 16 02:09:59.767772 kubelet[3500]: I1216 02:09:59.766868 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b08348aa-b9db-4017-ab2d-63cae97b2a73-whisker-ca-bundle\") pod \"whisker-77f9546868-lgh2z\" (UID: \"b08348aa-b9db-4017-ab2d-63cae97b2a73\") " pod="calico-system/whisker-77f9546868-lgh2z" Dec 16 02:09:59.767772 kubelet[3500]: I1216 02:09:59.766930 3500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tv7\" (UniqueName: \"kubernetes.io/projected/b08348aa-b9db-4017-ab2d-63cae97b2a73-kube-api-access-58tv7\") pod \"whisker-77f9546868-lgh2z\" (UID: \"b08348aa-b9db-4017-ab2d-63cae97b2a73\") " pod="calico-system/whisker-77f9546868-lgh2z" Dec 16 02:09:59.953835 containerd[1908]: time="2025-12-16T02:09:59.953772752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77f9546868-lgh2z,Uid:b08348aa-b9db-4017-ab2d-63cae97b2a73,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:00.997770 kubelet[3500]: I1216 02:10:00.997719 3500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574f6c46-74d5-42f4-9d86-d6cdf3677ba5" path="/var/lib/kubelet/pods/574f6c46-74d5-42f4-9d86-d6cdf3677ba5/volumes" Dec 16 02:10:01.719803 (udev-worker)[4507]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 02:10:01.721071 systemd-networkd[1478]: cali5c40471f5b2: Link UP Dec 16 02:10:01.728544 systemd-networkd[1478]: cali5c40471f5b2: Gained carrier Dec 16 02:10:01.840341 containerd[1908]: 2025-12-16 02:10:00.061 [INFO][4556] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:01.840341 containerd[1908]: 2025-12-16 02:10:01.164 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0 whisker-77f9546868- calico-system b08348aa-b9db-4017-ab2d-63cae97b2a73 951 0 2025-12-16 02:09:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77f9546868 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-92 whisker-77f9546868-lgh2z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5c40471f5b2 [] [] }} ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-" Dec 16 02:10:01.840341 containerd[1908]: 2025-12-16 02:10:01.165 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.840341 containerd[1908]: 2025-12-16 02:10:01.325 [INFO][4652] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" HandleID="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Workload="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.327 [INFO][4652] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" HandleID="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Workload="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030db40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-92", "pod":"whisker-77f9546868-lgh2z", "timestamp":"2025-12-16 02:10:01.325715587 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.327 [INFO][4652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.327 [INFO][4652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.328 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.385 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" host="ip-172-31-24-92" Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.466 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.480 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.487 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:01.841260 containerd[1908]: 2025-12-16 02:10:01.496 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.496 [INFO][4652] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" host="ip-172-31-24-92" Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.503 [INFO][4652] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020 Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.519 [INFO][4652] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" host="ip-172-31-24-92" Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.541 [INFO][4652] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.193/26] block=192.168.71.192/26 handle="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" host="ip-172-31-24-92" Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.541 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.193/26] handle="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" host="ip-172-31-24-92" Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.541 [INFO][4652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:01.842939 containerd[1908]: 2025-12-16 02:10:01.543 [INFO][4652] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.193/26] IPv6=[] ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" HandleID="k8s-pod-network.ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Workload="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.843304 containerd[1908]: 2025-12-16 02:10:01.557 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0", GenerateName:"whisker-77f9546868-", Namespace:"calico-system", SelfLink:"", UID:"b08348aa-b9db-4017-ab2d-63cae97b2a73", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77f9546868", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"whisker-77f9546868-lgh2z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c40471f5b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:01.843304 containerd[1908]: 2025-12-16 02:10:01.558 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.193/32] ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.845384 containerd[1908]: 2025-12-16 02:10:01.558 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c40471f5b2 ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.845384 containerd[1908]: 2025-12-16 02:10:01.758 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.845585 containerd[1908]: 2025-12-16 02:10:01.760 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" 
WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0", GenerateName:"whisker-77f9546868-", Namespace:"calico-system", SelfLink:"", UID:"b08348aa-b9db-4017-ab2d-63cae97b2a73", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77f9546868", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020", Pod:"whisker-77f9546868-lgh2z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c40471f5b2", MAC:"92:1f:50:03:66:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:01.845759 containerd[1908]: 2025-12-16 02:10:01.832 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" Namespace="calico-system" Pod="whisker-77f9546868-lgh2z" WorkloadEndpoint="ip--172--31--24--92-k8s-whisker--77f9546868--lgh2z-eth0" Dec 16 02:10:01.910403 containerd[1908]: time="2025-12-16T02:10:01.909866806Z" level=info msg="connecting to shim ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020" address="unix:///run/containerd/s/f1a0f0089f8800606fb2edb73c99f1ac30d71ca82aa5cbaa07ba56ee251c0a39" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:02.000627 containerd[1908]: time="2025-12-16T02:10:01.997690486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-t8ws5,Uid:6363be22-676f-4db3-afb1-0a1ce8d8def2,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:02.003207 containerd[1908]: time="2025-12-16T02:10:02.002640354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-5q889,Uid:5f75e4b0-aa22-4937-a793-7da0a16c1ff9,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:02.096632 systemd[1]: Started cri-containerd-ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020.scope - libcontainer container ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020. 
Dec 16 02:10:02.185000 audit: BPF prog-id=182 op=LOAD Dec 16 02:10:02.187000 audit: BPF prog-id=183 op=LOAD Dec 16 02:10:02.187000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.190000 audit: BPF prog-id=183 op=UNLOAD Dec 16 02:10:02.190000 audit[4694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.192000 audit: BPF prog-id=184 op=LOAD Dec 16 02:10:02.192000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.193000 audit: BPF prog-id=185 op=LOAD Dec 16 02:10:02.193000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.195000 audit: BPF prog-id=185 op=UNLOAD Dec 16 02:10:02.195000 audit[4694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.196000 audit: BPF prog-id=184 op=UNLOAD Dec 16 02:10:02.196000 audit[4694]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.197000 audit: BPF prog-id=186 op=LOAD Dec 16 02:10:02.197000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4682 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165343739393639363230376263306330376132343235376261656336 Dec 16 02:10:02.521106 containerd[1908]: time="2025-12-16T02:10:02.521006013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77f9546868-lgh2z,Uid:b08348aa-b9db-4017-ab2d-63cae97b2a73,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae4799696207bc0c07a24257baec6c831bd4632eb3f0a58fefcfc011fa9ac020\"" Dec 16 02:10:02.530424 containerd[1908]: time="2025-12-16T02:10:02.530071809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:02.635734 systemd-networkd[1478]: cali510804c834f: Link UP Dec 16 02:10:02.636309 systemd-networkd[1478]: cali510804c834f: Gained carrier Dec 16 02:10:02.637399 (udev-worker)[4506]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 02:10:02.709540 containerd[1908]: 2025-12-16 02:10:02.227 [INFO][4710] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:02.709540 containerd[1908]: 2025-12-16 02:10:02.284 [INFO][4710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0 goldmane-7c778bb748- calico-system 5f75e4b0-aa22-4937-a793-7da0a16c1ff9 881 0 2025-12-16 02:09:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-92 goldmane-7c778bb748-5q889 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali510804c834f [] [] }} ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-" Dec 16 02:10:02.709540 containerd[1908]: 2025-12-16 02:10:02.284 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.709540 containerd[1908]: 2025-12-16 02:10:02.456 [INFO][4746] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" HandleID="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Workload="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.459 [INFO][4746] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" HandleID="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Workload="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034aa90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-92", "pod":"goldmane-7c778bb748-5q889", "timestamp":"2025-12-16 02:10:02.456919268 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.461 [INFO][4746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.462 [INFO][4746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.462 [INFO][4746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.503 [INFO][4746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" host="ip-172-31-24-92" Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.527 [INFO][4746] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.555 [INFO][4746] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.566 [INFO][4746] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:02.710636 containerd[1908]: 2025-12-16 02:10:02.575 [INFO][4746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.575 [INFO][4746] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" host="ip-172-31-24-92" Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.581 [INFO][4746] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.600 [INFO][4746] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" host="ip-172-31-24-92" Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.615 [INFO][4746] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.194/26] block=192.168.71.192/26 handle="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" host="ip-172-31-24-92" Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.616 [INFO][4746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.194/26] handle="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" host="ip-172-31-24-92" Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.616 [INFO][4746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:02.711124 containerd[1908]: 2025-12-16 02:10:02.617 [INFO][4746] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.194/26] IPv6=[] ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" HandleID="k8s-pod-network.ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Workload="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.716800 containerd[1908]: 2025-12-16 02:10:02.626 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5f75e4b0-aa22-4937-a793-7da0a16c1ff9", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"goldmane-7c778bb748-5q889", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali510804c834f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.716800 containerd[1908]: 2025-12-16 02:10:02.626 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.194/32] ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.717049 containerd[1908]: 2025-12-16 02:10:02.627 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali510804c834f ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.717049 containerd[1908]: 2025-12-16 02:10:02.633 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.719280 containerd[1908]: 2025-12-16 02:10:02.633 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" 
WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5f75e4b0-aa22-4937-a793-7da0a16c1ff9", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d", Pod:"goldmane-7c778bb748-5q889", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali510804c834f", MAC:"2e:c2:c8:ca:f9:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.721580 containerd[1908]: 2025-12-16 02:10:02.691 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" Namespace="calico-system" Pod="goldmane-7c778bb748-5q889" WorkloadEndpoint="ip--172--31--24--92-k8s-goldmane--7c778bb748--5q889-eth0" Dec 16 02:10:02.828330 containerd[1908]: time="2025-12-16T02:10:02.825748642Z" level=info msg="connecting to shim ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d" address="unix:///run/containerd/s/b1186adb3272ad77243a0b09d869fcb9b86f994165e2e6d85d9a0edd558cf352" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:02.836498 systemd-networkd[1478]: cali63151274c7e: Link UP Dec 16 02:10:02.838806 systemd-networkd[1478]: cali63151274c7e: Gained carrier Dec 16 02:10:02.843605 containerd[1908]: time="2025-12-16T02:10:02.843452002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:02.851074 containerd[1908]: time="2025-12-16T02:10:02.849405214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:02.851780 containerd[1908]: time="2025-12-16T02:10:02.851678086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:02.852787 kubelet[3500]: E1216 02:10:02.852643 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:02.852787 kubelet[3500]: E1216 02:10:02.852736 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:02.858754 kubelet[3500]: E1216 02:10:02.852869 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:02.864228 containerd[1908]: time="2025-12-16T02:10:02.862290850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:02.930000 audit: BPF prog-id=187 op=LOAD Dec 16 02:10:02.930000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecb847d8 a2=98 a3=ffffecb847c8 items=0 ppid=4585 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.930000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:02.932019 containerd[1908]: 2025-12-16 02:10:02.272 [INFO][4705] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:10:02.932019 containerd[1908]: 2025-12-16 02:10:02.347 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0 calico-apiserver-8495b986f5- calico-apiserver 6363be22-676f-4db3-afb1-0a1ce8d8def2 878 0 2025-12-16 02:09:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8495b986f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-92 calico-apiserver-8495b986f5-t8ws5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63151274c7e [] [] }} ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-" Dec 16 02:10:02.932019 containerd[1908]: 2025-12-16 02:10:02.348 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.932019 containerd[1908]: 2025-12-16 02:10:02.533 [INFO][4754] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" HandleID="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Workload="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.931000 audit: BPF prog-id=187 op=UNLOAD Dec 16 02:10:02.931000 
audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffecb847a8 a3=0 items=0 ppid=4585 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.931000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:02.932000 audit: BPF prog-id=188 op=LOAD Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.536 [INFO][4754] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" HandleID="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Workload="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbe40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-92", "pod":"calico-apiserver-8495b986f5-t8ws5", "timestamp":"2025-12-16 02:10:02.533921589 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.537 [INFO][4754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.616 [INFO][4754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.616 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.660 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" host="ip-172-31-24-92" Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.704 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.724 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.733 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:02.933350 containerd[1908]: 2025-12-16 02:10:02.742 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.742 [INFO][4754] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" host="ip-172-31-24-92" Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.752 [INFO][4754] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7 Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.778 [INFO][4754] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" host="ip-172-31-24-92" Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.800 [INFO][4754] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.195/26] block=192.168.71.192/26 handle="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" host="ip-172-31-24-92" Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.800 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.195/26] handle="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" host="ip-172-31-24-92" Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.800 [INFO][4754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:02.934189 containerd[1908]: 2025-12-16 02:10:02.800 [INFO][4754] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.195/26] IPv6=[] ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" HandleID="k8s-pod-network.a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Workload="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.932000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecb84688 a2=74 a3=95 items=0 ppid=4585 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.932000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:02.934000 audit: BPF prog-id=188 op=UNLOAD Dec 16 02:10:02.934000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4585 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.934000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:02.934000 audit: BPF prog-id=189 op=LOAD Dec 16 02:10:02.934000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecb846b8 a2=40 a3=ffffecb846e8 items=0 ppid=4585 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.934000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:02.934000 audit: BPF prog-id=189 op=UNLOAD Dec 16 02:10:02.934000 audit[4816]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffecb846e8 items=0 ppid=4585 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.934000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:10:02.939252 containerd[1908]: 2025-12-16 02:10:02.810 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0", GenerateName:"calico-apiserver-8495b986f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6363be22-676f-4db3-afb1-0a1ce8d8def2", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495b986f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"calico-apiserver-8495b986f5-t8ws5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63151274c7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.939000 audit: BPF prog-id=190 op=LOAD Dec 16 02:10:02.940743 containerd[1908]: 2025-12-16 02:10:02.812 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.195/32] ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.940743 containerd[1908]: 2025-12-16 02:10:02.814 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63151274c7e ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.940743 containerd[1908]: 2025-12-16 02:10:02.849 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.940947 containerd[1908]: 2025-12-16 02:10:02.857 [INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0", GenerateName:"calico-apiserver-8495b986f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6363be22-676f-4db3-afb1-0a1ce8d8def2", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 24, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495b986f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7", Pod:"calico-apiserver-8495b986f5-t8ws5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63151274c7e", MAC:"3e:eb:01:1d:08:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:02.941116 containerd[1908]: 2025-12-16 02:10:02.909 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-t8ws5" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--t8ws5-eth0" Dec 16 02:10:02.939000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffedb2b78 a2=98 a3=fffffedb2b68 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:02.941000 audit: BPF prog-id=190 op=UNLOAD Dec 16 02:10:02.941000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffedb2b48 a3=0 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.941000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:02.941000 audit: BPF prog-id=191 op=LOAD Dec 16 02:10:02.941000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffedb2808 a2=74 a3=95 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.941000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:02.941000 audit: BPF prog-id=191 op=UNLOAD Dec 16 02:10:02.941000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.941000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:02.941000 audit: BPF prog-id=192 op=LOAD Dec 16 02:10:02.941000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 
a1=fffffedb2868 a2=94 a3=2 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.941000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:02.941000 audit: BPF prog-id=192 op=UNLOAD Dec 16 02:10:02.941000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:02.941000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.006852 containerd[1908]: time="2025-12-16T02:10:03.006763699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54647b869b-dj58v,Uid:1d7a12f8-f60f-4170-be36-168aef541297,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:03.017437 containerd[1908]: time="2025-12-16T02:10:03.017269315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw9cf,Uid:7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:03.024087 containerd[1908]: time="2025-12-16T02:10:03.023707831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f5sg,Uid:aaad2db4-9021-4d31-8275-e9b7ba731389,Namespace:calico-system,Attempt:0,}" Dec 16 02:10:03.032807 containerd[1908]: time="2025-12-16T02:10:03.032716123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-pp87t,Uid:2d19a364-8480-43c0-bbf1-372d74633ca8,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:10:03.038143 systemd[1]: Started cri-containerd-ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d.scope - libcontainer container ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d. 
Dec 16 02:10:03.166000 audit: BPF prog-id=193 op=LOAD Dec 16 02:10:03.178000 audit: BPF prog-id=194 op=LOAD Dec 16 02:10:03.178000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.178000 audit: BPF prog-id=194 op=UNLOAD Dec 16 02:10:03.178000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.181000 audit: BPF prog-id=195 op=LOAD Dec 16 02:10:03.181000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.188877 containerd[1908]: time="2025-12-16T02:10:03.188675516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:03.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.193567 containerd[1908]: time="2025-12-16T02:10:03.193220756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:03.197016 containerd[1908]: time="2025-12-16T02:10:03.194738324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:03.199614 kubelet[3500]: E1216 02:10:03.199264 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:03.199614 kubelet[3500]: E1216 02:10:03.199379 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:03.199917 kubelet[3500]: E1216 02:10:03.199854 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:03.203255 kubelet[3500]: E1216 02:10:03.200485 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:10:03.208602 kernel: kauditd_printk_skb: 73 callbacks suppressed Dec 16 02:10:03.208798 kernel: audit: type=1334 audit(1765851003.194:612): prog-id=196 op=LOAD Dec 16 02:10:03.194000 audit: BPF prog-id=196 op=LOAD Dec 16 02:10:03.217073 kernel: audit: type=1300 audit(1765851003.194:612): arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.194000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.217404 containerd[1908]: time="2025-12-16T02:10:03.211546664Z" level=info msg="connecting to shim a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7" address="unix:///run/containerd/s/86d02644e8eec229474baa8b200f25a7ca528938199bf0b87534e44bae7c1537" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:03.232143 kernel: audit: type=1327 audit(1765851003.194:612): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.205000 audit: BPF prog-id=196 op=UNLOAD Dec 16 02:10:03.237070 kernel: audit: type=1334 audit(1765851003.205:613): prog-id=196 op=UNLOAD Dec 16 02:10:03.205000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.244458 kernel: audit: type=1300 audit(1765851003.205:613): arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.254229 kernel: audit: type=1327 audit(1765851003.205:613): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.254381 kernel: audit: type=1334 audit(1765851003.205:614): prog-id=195 op=UNLOAD Dec 16 02:10:03.260678 kernel: audit: type=1300 audit(1765851003.205:614): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.205000 audit: BPF prog-id=195 op=UNLOAD Dec 16 02:10:03.205000 audit[4813]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.269582 kernel: audit: type=1327 audit(1765851003.205:614): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.275287 kernel: audit: type=1334 audit(1765851003.205:615): prog-id=197 op=LOAD Dec 16 02:10:03.205000 audit: BPF prog-id=197 op=LOAD Dec 16 02:10:03.205000 audit[4813]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=4797 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656361656639326566656361373333366563626533653234323266 Dec 16 02:10:03.495906 systemd[1]: Started cri-containerd-a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7.scope - libcontainer container a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7. 
Dec 16 02:10:03.550880 kubelet[3500]: E1216 02:10:03.550762 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:10:03.599000 audit[4934]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.599000 audit[4934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffed842120 a2=0 a3=1 items=0 ppid=3608 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.617709 systemd-networkd[1478]: cali5c40471f5b2: Gained IPv6LL Dec 16 02:10:03.616000 audit[4934]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:03.616000 audit[4934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffed842120 a2=0 a3=1 items=0 ppid=3608 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:03.851000 audit: BPF prog-id=198 op=LOAD Dec 16 02:10:03.851000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffedb2828 a2=40 a3=fffffedb2858 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.852000 audit: BPF prog-id=198 op=UNLOAD Dec 16 02:10:03.852000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffedb2858 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.869924 containerd[1908]: time="2025-12-16T02:10:03.869705471Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-5q889,Uid:5f75e4b0-aa22-4937-a793-7da0a16c1ff9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddecaef92efeca7336ecbe3e2422f232fe5548b2ef2332fcd38b319e74c2ff9d\"" Dec 16 02:10:03.891368 containerd[1908]: time="2025-12-16T02:10:03.891279911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:03.937848 systemd-networkd[1478]: cali510804c834f: Gained IPv6LL Dec 16 02:10:03.938000 audit: BPF prog-id=199 op=LOAD Dec 16 02:10:03.938000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffedb2838 a2=94 a3=4 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.939000 audit: BPF prog-id=199 op=UNLOAD Dec 16 02:10:03.939000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.943000 audit: BPF prog-id=200 op=LOAD Dec 16 02:10:03.943000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffedb2678 a2=94 a3=5 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.943000 audit: BPF prog-id=200 op=UNLOAD Dec 16 02:10:03.943000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.943000 audit: BPF prog-id=201 op=LOAD Dec 16 02:10:03.943000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffedb28a8 a2=94 a3=6 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.955000 audit: BPF prog-id=201 op=UNLOAD Dec 16 02:10:03.955000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.955000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.957000 audit: BPF prog-id=202 op=LOAD Dec 16 02:10:03.957000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffedb2078 a2=94 a3=83 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.957000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.958000 audit: BPF prog-id=203 op=LOAD Dec 16 02:10:03.958000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffedb1e38 a2=94 a3=2 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.958000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.961000 audit: BPF prog-id=203 op=UNLOAD Dec 16 02:10:03.961000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.961000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:03.992000 audit: BPF prog-id=202 op=UNLOAD Dec 16 02:10:03.992000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=14b57620 a3=14b4ab00 items=0 ppid=4585 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:03.992000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:10:04.054611 systemd-networkd[1478]: cali751c9540b22: Link UP Dec 16 02:10:04.060773 systemd-networkd[1478]: cali751c9540b22: Gained carrier Dec 16 02:10:04.079000 audit: BPF prog-id=204 op=LOAD Dec 16 02:10:04.081000 audit: BPF prog-id=205 op=LOAD Dec 16 02:10:04.081000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.081000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.084000 audit: BPF prog-id=205 op=UNLOAD Dec 16 02:10:04.084000 audit[4905]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.085000 audit: BPF prog-id=206 op=LOAD Dec 16 02:10:04.085000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.086000 audit: BPF prog-id=207 op=LOAD Dec 16 02:10:04.086000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.088000 audit: BPF prog-id=207 op=UNLOAD Dec 16 02:10:04.088000 audit[4905]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.089000 audit: BPF prog-id=206 op=UNLOAD Dec 16 02:10:04.089000 audit[4905]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.092000 audit: BPF prog-id=208 op=LOAD Dec 16 02:10:04.092000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4882 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132363262353238356164386339353537336638343434376563333835 Dec 16 02:10:04.120151 containerd[1908]: 2025-12-16 02:10:03.382 [INFO][4836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0 calico-kube-controllers-54647b869b- calico-system 1d7a12f8-f60f-4170-be36-168aef541297 877 0 2025-12-16 02:09:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:54647b869b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-92 calico-kube-controllers-54647b869b-dj58v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali751c9540b22 [] [] }} ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-" Dec 16 02:10:04.120151 containerd[1908]: 2025-12-16 02:10:03.383 [INFO][4836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.120151 containerd[1908]: 2025-12-16 02:10:03.773 [INFO][4918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" HandleID="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Workload="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.773 [INFO][4918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" HandleID="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Workload="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003338f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-92", "pod":"calico-kube-controllers-54647b869b-dj58v", "timestamp":"2025-12-16 02:10:03.772998107 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.773 [INFO][4918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.773 [INFO][4918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.773 [INFO][4918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.823 [INFO][4918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" host="ip-172-31-24-92" Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.853 [INFO][4918] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.917 [INFO][4918] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.925 [INFO][4918] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.120969 containerd[1908]: 2025-12-16 02:10:03.936 [INFO][4918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.936 [INFO][4918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" host="ip-172-31-24-92" Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.946 [INFO][4918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.969 [INFO][4918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" host="ip-172-31-24-92" Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.988 [INFO][4918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.196/26] block=192.168.71.192/26 handle="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" host="ip-172-31-24-92" Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.988 [INFO][4918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.196/26] handle="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" host="ip-172-31-24-92" Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.990 [INFO][4918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
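[editorial note] The IPAM exchange above is the normal Calico flow for this node: the host's affinity block 192.168.71.192/26 is looked up, confirmed, and loaded, and the next free address in it, 192.168.71.196, is claimed for calico-kube-controllers-54647b869b-dj58v (the pods set up next in this log get .197 and .198 from the same block). The arithmetic is just "64-address block, hand out the first address not yet in use". A toy standard-library sketch of that arithmetic, not Calico's actual allocator, assuming .193-.195 already went to the endpoints created earlier in this log:

    import ipaddress

    # Toy illustration of the block arithmetic in the IPAM log above; this is
    # not Calico's allocator. Assumption: .193-.195 are already in use.
    block = ipaddress.ip_network("192.168.71.192/26")
    in_use = {ipaddress.ip_address(f"192.168.71.{n}") for n in (193, 194, 195)}

    print(block.num_addresses)                                     # 64 (.192-.255)
    print(next(ip for ip in block.hosts() if ip not in in_use))    # 192.168.71.196
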
Dec 16 02:10:04.123002 containerd[1908]: 2025-12-16 02:10:03.992 [INFO][4918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.196/26] IPv6=[] ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" HandleID="k8s-pod-network.cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Workload="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.124197 containerd[1908]: 2025-12-16 02:10:04.014 [INFO][4836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0", GenerateName:"calico-kube-controllers-54647b869b-", Namespace:"calico-system", SelfLink:"", UID:"1d7a12f8-f60f-4170-be36-168aef541297", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54647b869b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"calico-kube-controllers-54647b869b-dj58v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali751c9540b22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.124376 containerd[1908]: 2025-12-16 02:10:04.015 [INFO][4836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.196/32] ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.124376 containerd[1908]: 2025-12-16 02:10:04.015 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali751c9540b22 ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.124376 containerd[1908]: 2025-12-16 02:10:04.080 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.125258 containerd[1908]: 
2025-12-16 02:10:04.088 [INFO][4836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0", GenerateName:"calico-kube-controllers-54647b869b-", Namespace:"calico-system", SelfLink:"", UID:"1d7a12f8-f60f-4170-be36-168aef541297", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54647b869b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a", Pod:"calico-kube-controllers-54647b869b-dj58v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali751c9540b22", MAC:"f6:e4:01:56:5e:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.126023 containerd[1908]: 2025-12-16 02:10:04.114 [INFO][4836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" Namespace="calico-system" Pod="calico-kube-controllers-54647b869b-dj58v" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--kube--controllers--54647b869b--dj58v-eth0" Dec 16 02:10:04.212509 systemd-networkd[1478]: calibefdd505385: Link UP Dec 16 02:10:04.225957 systemd-networkd[1478]: calibefdd505385: Gained carrier Dec 16 02:10:04.266873 containerd[1908]: time="2025-12-16T02:10:04.266787525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:04.270198 containerd[1908]: time="2025-12-16T02:10:04.269753241Z" level=info msg="connecting to shim cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a" address="unix:///run/containerd/s/3c8e483fb083d3380cb030ffada072234ed14af1671b0b593e19b588cbddb81d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:04.270570 containerd[1908]: time="2025-12-16T02:10:04.270127497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:04.270875 containerd[1908]: time="2025-12-16T02:10:04.270137337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:04.271630 
kubelet[3500]: E1216 02:10:04.271385 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:04.273942 kubelet[3500]: E1216 02:10:04.271754 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:04.273942 kubelet[3500]: E1216 02:10:04.272507 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5q889_calico-system(5f75e4b0-aa22-4937-a793-7da0a16c1ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:04.275477 kubelet[3500]: E1216 02:10:04.274154 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:04.296046 containerd[1908]: 2025-12-16 02:10:03.432 [INFO][4865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0 coredns-66bc5c9577- kube-system 7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3 875 0 2025-12-16 02:09:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-92 coredns-66bc5c9577-sw9cf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibefdd505385 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-" Dec 16 02:10:04.296046 containerd[1908]: 2025-12-16 02:10:03.438 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 02:10:04.296046 containerd[1908]: 2025-12-16 02:10:03.866 [INFO][4925] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" HandleID="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Workload="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:03.867 [INFO][4925] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" 
HandleID="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Workload="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000358230), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-92", "pod":"coredns-66bc5c9577-sw9cf", "timestamp":"2025-12-16 02:10:03.866941139 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:03.867 [INFO][4925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:03.989 [INFO][4925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:03.989 [INFO][4925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:04.022 [INFO][4925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" host="ip-172-31-24-92" Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:04.079 [INFO][4925] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:04.106 [INFO][4925] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:04.123 [INFO][4925] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:04.132 [INFO][4925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.296452 containerd[1908]: 2025-12-16 02:10:04.132 [INFO][4925] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" host="ip-172-31-24-92" Dec 16 02:10:04.298853 containerd[1908]: 2025-12-16 02:10:04.139 [INFO][4925] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775 Dec 16 02:10:04.298853 containerd[1908]: 2025-12-16 02:10:04.150 [INFO][4925] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" host="ip-172-31-24-92" Dec 16 02:10:04.298853 containerd[1908]: 2025-12-16 02:10:04.166 [INFO][4925] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.197/26] block=192.168.71.192/26 handle="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" host="ip-172-31-24-92" Dec 16 02:10:04.298853 containerd[1908]: 2025-12-16 02:10:04.168 [INFO][4925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.197/26] handle="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" host="ip-172-31-24-92" Dec 16 02:10:04.298853 containerd[1908]: 2025-12-16 02:10:04.169 [INFO][4925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:04.298853 containerd[1908]: 2025-12-16 02:10:04.172 [INFO][4925] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.197/26] IPv6=[] ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" HandleID="k8s-pod-network.403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Workload="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 02:10:04.300164 containerd[1908]: 2025-12-16 02:10:04.187 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"coredns-66bc5c9577-sw9cf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibefdd505385", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.300164 containerd[1908]: 2025-12-16 02:10:04.188 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.197/32] ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 02:10:04.300164 containerd[1908]: 2025-12-16 02:10:04.189 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibefdd505385 ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 
02:10:04.300164 containerd[1908]: 2025-12-16 02:10:04.231 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 02:10:04.300164 containerd[1908]: 2025-12-16 02:10:04.234 [INFO][4865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775", Pod:"coredns-66bc5c9577-sw9cf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibefdd505385", MAC:"5a:75:7a:fa:d3:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.300164 containerd[1908]: 2025-12-16 02:10:04.287 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" Namespace="kube-system" Pod="coredns-66bc5c9577-sw9cf" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--sw9cf-eth0" Dec 16 02:10:04.348000 audit: BPF prog-id=209 op=LOAD Dec 16 02:10:04.348000 audit[5004]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff474fc48 a2=98 a3=fffff474fc38 items=0 ppid=4585 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.349000 audit: BPF prog-id=209 op=UNLOAD Dec 16 02:10:04.349000 audit[5004]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff474fc18 a3=0 items=0 ppid=4585 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.349000 audit: BPF prog-id=210 op=LOAD Dec 16 02:10:04.349000 audit[5004]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff474faf8 a2=74 a3=95 items=0 ppid=4585 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.349000 audit: BPF prog-id=210 op=UNLOAD Dec 16 02:10:04.349000 audit[5004]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4585 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.349000 audit: BPF prog-id=211 op=LOAD Dec 16 02:10:04.349000 audit[5004]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff474fb28 a2=40 a3=fffff474fb58 items=0 ppid=4585 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.352000 audit: BPF prog-id=211 op=UNLOAD Dec 16 02:10:04.352000 audit[5004]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff474fb58 items=0 ppid=4585 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:10:04.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:10:04.411179 systemd[1]: Started cri-containerd-cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a.scope - libcontainer container cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a. Dec 16 02:10:04.481702 systemd-networkd[1478]: cali2fae0d2f820: Link UP Dec 16 02:10:04.483943 systemd-networkd[1478]: cali2fae0d2f820: Gained carrier Dec 16 02:10:04.529023 kubelet[3500]: E1216 02:10:04.528647 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:04.531240 kubelet[3500]: E1216 02:10:04.530644 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:10:04.542090 containerd[1908]: time="2025-12-16T02:10:04.541879895Z" level=info msg="connecting to shim 403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775" address="unix:///run/containerd/s/7a3af3470d1141bc17b6dd08db6cd2f887bf1e1781143a9a7eb716687a108503" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:03.759 [INFO][4860] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0 csi-node-driver- calico-system aaad2db4-9021-4d31-8275-e9b7ba731389 775 0 2025-12-16 02:09:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-92 csi-node-driver-7f5sg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2fae0d2f820 [] [] }} ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:03.762 
[INFO][4860] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.095 [INFO][4952] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" HandleID="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Workload="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.095 [INFO][4952] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" HandleID="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Workload="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400049e370), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-92", "pod":"csi-node-driver-7f5sg", "timestamp":"2025-12-16 02:10:04.095219024 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.095 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.168 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.174 [INFO][4952] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.240 [INFO][4952] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.267 [INFO][4952] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.304 [INFO][4952] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.314 [INFO][4952] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.330 [INFO][4952] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.330 [INFO][4952] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.334 [INFO][4952] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40 Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.350 [INFO][4952] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 
handle="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.392 [INFO][4952] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.198/26] block=192.168.71.192/26 handle="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.392 [INFO][4952] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.198/26] handle="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" host="ip-172-31-24-92" Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.392 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:04.702715 containerd[1908]: 2025-12-16 02:10:04.392 [INFO][4952] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.198/26] IPv6=[] ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" HandleID="k8s-pod-network.054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Workload="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.705770 containerd[1908]: 2025-12-16 02:10:04.411 [INFO][4860] cni-plugin/k8s.go 418: Populated endpoint ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aaad2db4-9021-4d31-8275-e9b7ba731389", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"csi-node-driver-7f5sg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fae0d2f820", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.705770 containerd[1908]: 2025-12-16 02:10:04.414 [INFO][4860] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.198/32] ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.705770 containerd[1908]: 2025-12-16 02:10:04.414 [INFO][4860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fae0d2f820 
ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.705770 containerd[1908]: 2025-12-16 02:10:04.494 [INFO][4860] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.705770 containerd[1908]: 2025-12-16 02:10:04.506 [INFO][4860] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aaad2db4-9021-4d31-8275-e9b7ba731389", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40", Pod:"csi-node-driver-7f5sg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fae0d2f820", MAC:"e6:08:26:63:94:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.705770 containerd[1908]: 2025-12-16 02:10:04.665 [INFO][4860] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" Namespace="calico-system" Pod="csi-node-driver-7f5sg" WorkloadEndpoint="ip--172--31--24--92-k8s-csi--node--driver--7f5sg-eth0" Dec 16 02:10:04.746201 systemd[1]: Started cri-containerd-403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775.scope - libcontainer container 403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775. 
Dec 16 02:10:04.769631 systemd-networkd[1478]: cali63151274c7e: Gained IPv6LL Dec 16 02:10:04.798049 systemd-networkd[1478]: calia2fdd483a2b: Link UP Dec 16 02:10:04.801684 systemd-networkd[1478]: calia2fdd483a2b: Gained carrier Dec 16 02:10:04.841747 containerd[1908]: time="2025-12-16T02:10:04.841513860Z" level=info msg="connecting to shim 054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40" address="unix:///run/containerd/s/cdd08112aff430ce699b990315520b3ae97772201618c9ec276eaae0f1fd2704" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:03.760 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0 calico-apiserver-8495b986f5- calico-apiserver 2d19a364-8480-43c0-bbf1-372d74633ca8 879 0 2025-12-16 02:09:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8495b986f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-92 calico-apiserver-8495b986f5-pp87t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2fdd483a2b [] [] }} ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:03.764 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.108 [INFO][4954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" HandleID="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Workload="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.111 [INFO][4954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" HandleID="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Workload="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000349a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-92", "pod":"calico-apiserver-8495b986f5-pp87t", "timestamp":"2025-12-16 02:10:04.108782673 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.113 [INFO][4954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.392 [INFO][4954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.392 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.467 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.506 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.569 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.608 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.676 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.676 [INFO][4954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.698 [INFO][4954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.734 [INFO][4954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.779 [INFO][4954] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.199/26] block=192.168.71.192/26 handle="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.779 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.199/26] handle="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" host="ip-172-31-24-92" Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.779 [INFO][4954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:10:04.880566 containerd[1908]: 2025-12-16 02:10:04.779 [INFO][4954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.199/26] IPv6=[] ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" HandleID="k8s-pod-network.0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Workload="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.885896 containerd[1908]: 2025-12-16 02:10:04.791 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0", GenerateName:"calico-apiserver-8495b986f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d19a364-8480-43c0-bbf1-372d74633ca8", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495b986f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"calico-apiserver-8495b986f5-pp87t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2fdd483a2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.885896 containerd[1908]: 2025-12-16 02:10:04.791 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.199/32] ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.885896 containerd[1908]: 2025-12-16 02:10:04.791 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2fdd483a2b ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.885896 containerd[1908]: 2025-12-16 02:10:04.805 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.885896 containerd[1908]: 2025-12-16 02:10:04.807 [INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0", GenerateName:"calico-apiserver-8495b986f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d19a364-8480-43c0-bbf1-372d74633ca8", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495b986f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d", Pod:"calico-apiserver-8495b986f5-pp87t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2fdd483a2b", MAC:"c6:98:69:af:c1:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:04.885896 containerd[1908]: 2025-12-16 02:10:04.862 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" Namespace="calico-apiserver" Pod="calico-apiserver-8495b986f5-pp87t" WorkloadEndpoint="ip--172--31--24--92-k8s-calico--apiserver--8495b986f5--pp87t-eth0" Dec 16 02:10:04.932000 audit: BPF prog-id=212 op=LOAD Dec 16 02:10:04.937000 audit: BPF prog-id=213 op=LOAD Dec 16 02:10:04.937000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:04.940000 audit: BPF prog-id=213 op=UNLOAD Dec 16 02:10:04.940000 audit[5053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.940000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:04.940000 audit: BPF prog-id=214 op=LOAD Dec 16 02:10:04.940000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:04.955000 audit: BPF prog-id=215 op=LOAD Dec 16 02:10:04.955000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:04.960000 audit: BPF prog-id=215 op=UNLOAD Dec 16 02:10:04.960000 audit[5053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:04.966000 audit: BPF prog-id=214 op=UNLOAD Dec 16 02:10:04.966000 audit[5053]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:04.971000 audit: BPF prog-id=216 op=LOAD Dec 16 02:10:04.971000 audit[5053]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5039 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:04.978209 systemd[1]: Started cri-containerd-054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40.scope - libcontainer container 
054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40. Dec 16 02:10:04.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336630323061323634386433363937616464373035346439616331 Dec 16 02:10:05.029099 containerd[1908]: time="2025-12-16T02:10:05.028991145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8srj8,Uid:c6cccb09-9581-436e-8372-f4efd2272de1,Namespace:kube-system,Attempt:0,}" Dec 16 02:10:05.109475 containerd[1908]: time="2025-12-16T02:10:05.108116481Z" level=info msg="connecting to shim 0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d" address="unix:///run/containerd/s/535ab480b9212df20858bb9fe6cb5e384be12aea8e35c8b2ba6611cb5082bb3a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:05.155553 systemd-networkd[1478]: cali751c9540b22: Gained IPv6LL Dec 16 02:10:05.300000 audit[5167]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:05.300000 audit[5167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff6721510 a2=0 a3=1 items=0 ppid=3608 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:05.358000 audit[5167]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:05.363013 systemd[1]: Started cri-containerd-0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d.scope - libcontainer container 0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d. 
Dec 16 02:10:05.358000 audit[5167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff6721510 a2=0 a3=1 items=0 ppid=3608 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:05.397000 audit: BPF prog-id=217 op=LOAD Dec 16 02:10:05.399000 audit: BPF prog-id=218 op=LOAD Dec 16 02:10:05.399000 audit[5006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.402000 audit: BPF prog-id=218 op=UNLOAD Dec 16 02:10:05.402000 audit[5006]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.406000 audit: BPF prog-id=219 op=LOAD Dec 16 02:10:05.406000 audit[5006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.409000 audit: BPF prog-id=220 op=LOAD Dec 16 02:10:05.409000 audit[5006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.412000 audit: BPF prog-id=220 op=UNLOAD Dec 16 02:10:05.412000 audit[5006]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.413000 audit: BPF prog-id=219 op=UNLOAD Dec 16 02:10:05.413000 audit[5006]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.415000 audit: BPF prog-id=221 op=LOAD Dec 16 02:10:05.419208 containerd[1908]: time="2025-12-16T02:10:05.418976735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw9cf,Uid:7c2b8467-d5c0-4053-9ef4-fa4d698ae6d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775\"" Dec 16 02:10:05.415000 audit[5006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4987 pid=5006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362623161623761653366356535663137616534306662313664383733 Dec 16 02:10:05.446683 containerd[1908]: time="2025-12-16T02:10:05.446308679Z" level=info msg="CreateContainer within sandbox \"403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:10:05.486439 containerd[1908]: time="2025-12-16T02:10:05.486349979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-t8ws5,Uid:6363be22-676f-4db3-afb1-0a1ce8d8def2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a262b5285ad8c95573f84447ec3853b645715efbebe4eecb5cba294fe47dcbd7\"" Dec 16 02:10:05.498780 containerd[1908]: time="2025-12-16T02:10:05.498050651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:05.545526 containerd[1908]: time="2025-12-16T02:10:05.544187460Z" level=info msg="Container 698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:05.571796 kubelet[3500]: E1216 02:10:05.570570 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:05.591695 containerd[1908]: time="2025-12-16T02:10:05.591629304Z" level=info msg="CreateContainer within sandbox \"403f020a2648d3697add7054d9ac1ab87d0164d0ca36e21e4f09cbb9028bf775\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180\"" Dec 16 02:10:05.593275 containerd[1908]: time="2025-12-16T02:10:05.593112324Z" level=info msg="StartContainer for \"698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180\"" Dec 16 02:10:05.607845 containerd[1908]: time="2025-12-16T02:10:05.607696944Z" level=info msg="connecting to shim 698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180" address="unix:///run/containerd/s/7a3af3470d1141bc17b6dd08db6cd2f887bf1e1781143a9a7eb716687a108503" protocol=ttrpc version=3 Dec 16 02:10:05.689756 containerd[1908]: time="2025-12-16T02:10:05.688696248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54647b869b-dj58v,Uid:1d7a12f8-f60f-4170-be36-168aef541297,Namespace:calico-system,Attempt:0,} returns sandbox id \"cbb1ab7ae3f5e5f17ae40fb16d873e0586f3d5f14476b748de0541c70695f24a\"" Dec 16 02:10:05.732000 audit: BPF prog-id=222 op=LOAD Dec 16 02:10:05.733000 audit: BPF prog-id=223 op=LOAD Dec 16 02:10:05.733000 audit[5100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5088 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.733000 audit: BPF prog-id=223 op=UNLOAD Dec 16 02:10:05.733000 audit[5100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5088 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.735000 audit: BPF prog-id=224 op=LOAD Dec 16 02:10:05.735000 audit[5100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5088 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.735000 audit: BPF prog-id=225 op=LOAD Dec 16 02:10:05.735000 audit[5100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5088 
pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.736000 audit: BPF prog-id=225 op=UNLOAD Dec 16 02:10:05.736000 audit[5100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5088 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.741000 audit: BPF prog-id=224 op=UNLOAD Dec 16 02:10:05.741000 audit[5100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5088 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.742000 audit: BPF prog-id=226 op=LOAD Dec 16 02:10:05.742000 audit[5100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5088 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035343838373336346137343234666332623139333033373163326532 Dec 16 02:10:05.751238 systemd[1]: Started cri-containerd-698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180.scope - libcontainer container 698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180. 
Dec 16 02:10:05.770000 audit: BPF prog-id=227 op=LOAD Dec 16 02:10:05.773000 audit: BPF prog-id=228 op=LOAD Dec 16 02:10:05.773000 audit[5151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.776000 audit: BPF prog-id=228 op=UNLOAD Dec 16 02:10:05.776000 audit[5151]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.779000 audit: BPF prog-id=229 op=LOAD Dec 16 02:10:05.779000 audit[5151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.779000 audit: BPF prog-id=230 op=LOAD Dec 16 02:10:05.779000 audit[5151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.780000 audit: BPF prog-id=230 op=UNLOAD Dec 16 02:10:05.780000 audit[5151]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.780000 audit: BPF prog-id=229 op=UNLOAD Dec 16 02:10:05.780000 audit[5151]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.780000 audit: BPF prog-id=231 op=LOAD Dec 16 02:10:05.780000 audit[5151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5139 pid=5151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061303762636238616131393263663061626135643538646533366335 Dec 16 02:10:05.812000 audit: BPF prog-id=232 op=LOAD Dec 16 02:10:05.816000 audit: BPF prog-id=233 op=LOAD Dec 16 02:10:05.816000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.817000 audit: BPF prog-id=233 op=UNLOAD Dec 16 02:10:05.817000 audit[5204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.819000 audit: BPF prog-id=234 op=LOAD Dec 16 02:10:05.819000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.820000 audit: BPF prog-id=235 op=LOAD Dec 16 02:10:05.820000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 
ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.820000 audit: BPF prog-id=235 op=UNLOAD Dec 16 02:10:05.821554 containerd[1908]: time="2025-12-16T02:10:05.821144557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:05.820000 audit[5204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.821000 audit: BPF prog-id=234 op=UNLOAD Dec 16 02:10:05.821000 audit[5204]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.823000 audit: BPF prog-id=236 op=LOAD Dec 16 02:10:05.823000 audit[5204]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5039 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:05.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639383131386162663733376634616533373362323234336263626666 Dec 16 02:10:05.826791 containerd[1908]: time="2025-12-16T02:10:05.826559029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:05.826791 containerd[1908]: time="2025-12-16T02:10:05.826703245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:05.828584 kubelet[3500]: E1216 02:10:05.827966 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:05.828745 kubelet[3500]: E1216 02:10:05.828601 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:05.829245 kubelet[3500]: E1216 02:10:05.829157 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-t8ws5_calico-apiserver(6363be22-676f-4db3-afb1-0a1ce8d8def2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:05.829245 kubelet[3500]: E1216 02:10:05.829237 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:10:05.831968 containerd[1908]: time="2025-12-16T02:10:05.831767293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:05.987113 systemd-networkd[1478]: califedc784b2a9: Link UP Dec 16 02:10:05.992312 systemd-networkd[1478]: califedc784b2a9: Gained carrier Dec 16 02:10:06.044781 containerd[1908]: time="2025-12-16T02:10:06.044731966Z" level=info msg="StartContainer for \"698118abf737f4ae373b2243bcbff769a1e5614587de186f3644e5288c274180\" returns successfully" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.379 [INFO][5126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0 coredns-66bc5c9577- kube-system c6cccb09-9581-436e-8372-f4efd2272de1 876 0 2025-12-16 02:09:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-92 coredns-66bc5c9577-8srj8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califedc784b2a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.379 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.717 [INFO][5194] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" HandleID="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" 
Workload="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.718 [INFO][5194] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" HandleID="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Workload="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-92", "pod":"coredns-66bc5c9577-8srj8", "timestamp":"2025-12-16 02:10:05.717821437 +0000 UTC"}, Hostname:"ip-172-31-24-92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.719 [INFO][5194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.720 [INFO][5194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.720 [INFO][5194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-92' Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.780 [INFO][5194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.796 [INFO][5194] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.811 [INFO][5194] ipam/ipam.go 511: Trying affinity for 192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.825 [INFO][5194] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.860 [INFO][5194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.192/26 host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.863 [INFO][5194] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.71.192/26 handle="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.874 [INFO][5194] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885 Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.905 [INFO][5194] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.71.192/26 handle="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.935 [INFO][5194] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.71.200/26] block=192.168.71.192/26 handle="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" host="ip-172-31-24-92" Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.936 [INFO][5194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.200/26] handle="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" host="ip-172-31-24-92" Dec 16 
02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.936 [INFO][5194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 02:10:06.071187 containerd[1908]: 2025-12-16 02:10:05.936 [INFO][5194] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.71.200/26] IPv6=[] ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" HandleID="k8s-pod-network.9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Workload="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.072587 containerd[1908]: 2025-12-16 02:10:05.954 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c6cccb09-9581-436e-8372-f4efd2272de1", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"", Pod:"coredns-66bc5c9577-8srj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califedc784b2a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:06.072587 containerd[1908]: 2025-12-16 02:10:05.954 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.200/32] ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.072587 containerd[1908]: 2025-12-16 02:10:05.954 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califedc784b2a9 ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" 
Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.072587 containerd[1908]: 2025-12-16 02:10:06.011 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.072587 containerd[1908]: 2025-12-16 02:10:06.020 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c6cccb09-9581-436e-8372-f4efd2272de1", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 9, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-92", ContainerID:"9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885", Pod:"coredns-66bc5c9577-8srj8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califedc784b2a9", MAC:"ea:66:f1:42:b0:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:10:06.072587 containerd[1908]: 2025-12-16 02:10:06.056 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" Namespace="kube-system" Pod="coredns-66bc5c9577-8srj8" WorkloadEndpoint="ip--172--31--24--92-k8s-coredns--66bc5c9577--8srj8-eth0" Dec 16 02:10:06.096392 containerd[1908]: time="2025-12-16T02:10:06.096132538Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7f5sg,Uid:aaad2db4-9021-4d31-8275-e9b7ba731389,Namespace:calico-system,Attempt:0,} returns sandbox id \"054887364a7424fc2b1930371c2e2d0c334c23b8c406a9a06773dbe68f619a40\"" Dec 16 02:10:06.122362 containerd[1908]: time="2025-12-16T02:10:06.121750907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:06.128891 containerd[1908]: time="2025-12-16T02:10:06.127715591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:06.129224 containerd[1908]: time="2025-12-16T02:10:06.128971283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:06.130238 kubelet[3500]: E1216 02:10:06.130162 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:06.130238 kubelet[3500]: E1216 02:10:06.130238 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:06.131830 containerd[1908]: time="2025-12-16T02:10:06.131297063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:06.136189 kubelet[3500]: E1216 02:10:06.136105 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54647b869b-dj58v_calico-system(1d7a12f8-f60f-4170-be36-168aef541297): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:06.136358 kubelet[3500]: E1216 02:10:06.136200 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:10:06.201725 containerd[1908]: time="2025-12-16T02:10:06.201538871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495b986f5-pp87t,Uid:2d19a364-8480-43c0-bbf1-372d74633ca8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a07bcb8aa192cf0aba5d58de36c5fa086c7f2aa049c5e6d646b6a1cc233db8d\"" Dec 16 02:10:06.228470 containerd[1908]: time="2025-12-16T02:10:06.226384823Z" level=info msg="connecting to shim 9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885" address="unix:///run/containerd/s/44399ab323a061835d40e71a7db4d4d8cfcd844d0f5f087337bce01ab1c18198" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:10:06.241703 
systemd-networkd[1478]: calibefdd505385: Gained IPv6LL Dec 16 02:10:06.305648 systemd-networkd[1478]: cali2fae0d2f820: Gained IPv6LL Dec 16 02:10:06.343832 systemd[1]: Started cri-containerd-9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885.scope - libcontainer container 9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885. Dec 16 02:10:06.394000 audit: BPF prog-id=237 op=LOAD Dec 16 02:10:06.397000 audit: BPF prog-id=238 op=LOAD Dec 16 02:10:06.397000 audit[5284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.397000 audit: BPF prog-id=238 op=UNLOAD Dec 16 02:10:06.397000 audit[5284]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.398000 audit: BPF prog-id=239 op=LOAD Dec 16 02:10:06.398000 audit[5284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.403000 audit: BPF prog-id=240 op=LOAD Dec 16 02:10:06.403000 audit[5284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.407000 audit: BPF prog-id=240 op=UNLOAD Dec 16 02:10:06.407000 audit[5284]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.407000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.408000 audit: BPF prog-id=239 op=UNLOAD Dec 16 02:10:06.408000 audit[5284]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.408000 audit: BPF prog-id=241 op=LOAD Dec 16 02:10:06.408000 audit[5284]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5271 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963373338623063366562383365396261633035343933396562306236 Dec 16 02:10:06.417213 systemd-networkd[1478]: vxlan.calico: Link UP Dec 16 02:10:06.417228 systemd-networkd[1478]: vxlan.calico: Gained carrier Dec 16 02:10:06.446087 containerd[1908]: time="2025-12-16T02:10:06.445896084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:06.448855 containerd[1908]: time="2025-12-16T02:10:06.448765404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:06.449254 containerd[1908]: time="2025-12-16T02:10:06.448918860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:06.450459 kubelet[3500]: E1216 02:10:06.449708 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:06.452990 kubelet[3500]: E1216 02:10:06.451430 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:06.454394 kubelet[3500]: E1216 02:10:06.454278 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 02:10:06.456849 containerd[1908]: time="2025-12-16T02:10:06.456614748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:06.611981 kubelet[3500]: E1216 02:10:06.611896 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:10:06.614199 kubelet[3500]: E1216 02:10:06.612398 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:10:06.635789 containerd[1908]: time="2025-12-16T02:10:06.635259133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8srj8,Uid:c6cccb09-9581-436e-8372-f4efd2272de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885\"" Dec 16 02:10:06.667068 containerd[1908]: time="2025-12-16T02:10:06.666184585Z" level=info msg="CreateContainer within sandbox \"9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:10:06.675104 kubelet[3500]: I1216 02:10:06.674091 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sw9cf" podStartSLOduration=57.673918921 podStartE2EDuration="57.673918921s" podCreationTimestamp="2025-12-16 02:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:06.671774305 +0000 UTC m=+62.019850221" watchObservedRunningTime="2025-12-16 02:10:06.673918921 +0000 UTC m=+62.021994909" Dec 16 02:10:06.689992 systemd-networkd[1478]: calia2fdd483a2b: Gained IPv6LL Dec 16 02:10:06.705066 containerd[1908]: time="2025-12-16T02:10:06.704568469Z" level=info msg="Container a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:10:06.707439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140641410.mount: Deactivated successfully. 
Dec 16 02:10:06.726000 audit[5325]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:06.726000 audit[5325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff54c39f0 a2=0 a3=1 items=0 ppid=3608 pid=5325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:06.733000 audit[5325]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:06.733000 audit[5325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff54c39f0 a2=0 a3=1 items=0 ppid=3608 pid=5325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:06.740345 containerd[1908]: time="2025-12-16T02:10:06.739083698Z" level=info msg="CreateContainer within sandbox \"9c738b0c6eb83e9bac054939eb0b6045ac8899582880117a4aa18315c83f9885\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef\"" Dec 16 02:10:06.742047 containerd[1908]: time="2025-12-16T02:10:06.741862046Z" level=info msg="StartContainer for \"a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef\"" Dec 16 02:10:06.745022 containerd[1908]: time="2025-12-16T02:10:06.744943718Z" level=info msg="connecting to shim a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef" address="unix:///run/containerd/s/44399ab323a061835d40e71a7db4d4d8cfcd844d0f5f087337bce01ab1c18198" protocol=ttrpc version=3 Dec 16 02:10:06.762824 containerd[1908]: time="2025-12-16T02:10:06.762759098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:06.765290 containerd[1908]: time="2025-12-16T02:10:06.765199358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:06.765746 containerd[1908]: time="2025-12-16T02:10:06.765345590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:06.767742 kubelet[3500]: E1216 02:10:06.766130 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:06.767742 kubelet[3500]: E1216 02:10:06.766194 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" 
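On the Kubernetes side, the kubelet errors above ("ErrImagePull", later "ImagePullBackOff") end up as waiting-state reasons on the affected containers rather than only in this journal. A minimal sketch of reading them back through the API is below; it assumes the official kubernetes Python client and a kubeconfig with access to this cluster, and the namespaces are simply the ones named in the log.

from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config would also work).
config.load_kube_config()
v1 = client.CoreV1Api()

# Namespaces that appear in the kubelet errors above.
for namespace in ("calico-system", "calico-apiserver", "kube-system"):
    for pod in v1.list_namespaced_pod(namespace).items:
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                # waiting.message carries the same "failed to resolve image ... not found" text.
                print(namespace, pod.metadata.name, status.name, waiting.reason)

Pods whose containers never left the waiting state (calico-kube-controllers, csi-node-driver, the calico-apiserver replicas) would be listed here, matching the pod_workers "Error syncing pod" entries in the log.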
Dec 16 02:10:06.767742 kubelet[3500]: E1216 02:10:06.766475 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-pp87t_calico-apiserver(2d19a364-8480-43c0-bbf1-372d74633ca8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:06.767742 kubelet[3500]: E1216 02:10:06.766531 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:10:06.770380 containerd[1908]: time="2025-12-16T02:10:06.770296322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:10:06.809924 systemd[1]: Started cri-containerd-a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef.scope - libcontainer container a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef. Dec 16 02:10:06.872000 audit: BPF prog-id=242 op=LOAD Dec 16 02:10:06.872000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9602b28 a2=98 a3=ffffd9602b18 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.875000 audit: BPF prog-id=242 op=UNLOAD Dec 16 02:10:06.875000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd9602af8 a3=0 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.875000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.876000 audit: BPF prog-id=243 op=LOAD Dec 16 02:10:06.876000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9602808 a2=74 a3=95 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.876000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.876000 audit: BPF prog-id=243 op=UNLOAD Dec 16 02:10:06.876000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.876000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.877000 audit: BPF prog-id=244 op=LOAD Dec 16 02:10:06.877000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9602868 a2=94 a3=2 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.877000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.925000 audit: BPF prog-id=244 op=UNLOAD Dec 16 02:10:06.925000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.925000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.932000 audit: BPF prog-id=245 op=LOAD Dec 16 02:10:06.927000 audit: BPF prog-id=246 op=LOAD Dec 16 02:10:06.927000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd96026e8 a2=40 a3=ffffd9602718 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.927000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.933000 audit: BPF prog-id=246 op=UNLOAD Dec 16 02:10:06.933000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd9602718 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.933000 audit: BPF prog-id=247 op=LOAD Dec 16 02:10:06.935000 audit: BPF prog-id=248 op=LOAD Dec 16 02:10:06.935000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.935000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.935000 audit: BPF prog-id=248 op=UNLOAD Dec 16 02:10:06.935000 audit[5327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.933000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd9602838 a2=94 a3=b7 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.936000 audit: BPF prog-id=249 op=LOAD Dec 16 02:10:06.936000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.937000 audit: BPF prog-id=250 op=LOAD Dec 16 02:10:06.933000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.937000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.937000 audit: BPF prog-id=250 op=UNLOAD Dec 16 02:10:06.937000 audit[5327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.937000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.937000 audit: BPF prog-id=249 op=UNLOAD Dec 16 02:10:06.937000 audit[5327]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.938000 audit: BPF prog-id=247 op=UNLOAD Dec 16 02:10:06.938000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.938000 audit: BPF prog-id=251 op=LOAD Dec 16 02:10:06.938000 audit[5327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5271 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323636343930313930653761373366663466643563336361336238 Dec 16 02:10:06.938000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.959000 audit: BPF prog-id=252 op=LOAD Dec 16 02:10:06.959000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd9601ee8 a2=94 a3=2 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:06.961000 audit: BPF prog-id=252 op=UNLOAD Dec 16 02:10:06.961000 audit[5350]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.961000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 
02:10:06.961000 audit: BPF prog-id=253 op=LOAD Dec 16 02:10:06.961000 audit[5350]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd9602078 a2=94 a3=30 items=0 ppid=4585 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:06.961000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:10:07.012900 containerd[1908]: time="2025-12-16T02:10:07.012748463Z" level=info msg="StartContainer for \"a6266490190e7a73ff4fd5c3ca3b8a4187cdf8cad72521bd51a0ff6fec49eaef\" returns successfully" Dec 16 02:10:07.022000 audit: BPF prog-id=254 op=LOAD Dec 16 02:10:07.022000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef328218 a2=98 a3=ffffef328208 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.022000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.022000 audit: BPF prog-id=254 op=UNLOAD Dec 16 02:10:07.022000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffef3281e8 a3=0 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.022000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.022000 audit: BPF prog-id=255 op=LOAD Dec 16 02:10:07.022000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef327ea8 a2=74 a3=95 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.022000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.022000 audit: BPF prog-id=255 op=UNLOAD Dec 16 02:10:07.022000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.022000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.022000 audit: BPF prog-id=256 op=LOAD Dec 16 02:10:07.022000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef327f08 a2=94 a3=2 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.022000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.024000 audit: BPF prog-id=256 op=UNLOAD Dec 16 02:10:07.024000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.088458 containerd[1908]: time="2025-12-16T02:10:07.087545939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:07.091848 containerd[1908]: time="2025-12-16T02:10:07.091615931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:10:07.091848 containerd[1908]: time="2025-12-16T02:10:07.091775519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:07.093507 kubelet[3500]: E1216 02:10:07.092305 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:07.093507 kubelet[3500]: E1216 02:10:07.092372 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:07.094356 kubelet[3500]: E1216 02:10:07.093891 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:07.094356 kubelet[3500]: E1216 02:10:07.093989 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:10:07.273000 audit: BPF prog-id=257 op=LOAD Dec 16 02:10:07.273000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef327ec8 a2=40 a3=ffffef327ef8 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.273000 audit: BPF prog-id=257 op=UNLOAD Dec 16 02:10:07.273000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffef327ef8 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.292000 audit: BPF prog-id=258 op=LOAD Dec 16 02:10:07.292000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef327ed8 a2=94 a3=4 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.293000 audit: BPF prog-id=258 op=UNLOAD Dec 16 02:10:07.293000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.293000 audit: BPF prog-id=259 op=LOAD Dec 16 02:10:07.293000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef327d18 a2=94 a3=5 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.293000 audit: BPF prog-id=259 op=UNLOAD Dec 16 02:10:07.293000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.293000 audit: BPF prog-id=260 op=LOAD Dec 16 02:10:07.293000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef327f48 a2=94 a3=6 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.293000 audit: BPF prog-id=260 op=UNLOAD Dec 16 02:10:07.293000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.294000 audit: BPF prog-id=261 op=LOAD Dec 16 02:10:07.294000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef327718 a2=94 a3=83 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.295000 audit: BPF prog-id=262 op=LOAD Dec 16 02:10:07.295000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffef3274d8 a2=94 a3=2 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.295000 audit: BPF prog-id=262 op=UNLOAD Dec 16 02:10:07.295000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.296000 audit: BPF prog-id=261 op=UNLOAD Dec 16 02:10:07.296000 audit[5363]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=5 a1=57156c a2=c4cf620 a3=c4c2b00 items=0 ppid=4585 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:10:07.306000 audit: BPF prog-id=253 op=UNLOAD Dec 16 02:10:07.306000 audit[4585]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400047af00 a2=0 a3=0 items=0 ppid=4570 pid=4585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.306000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 02:10:07.457648 systemd-networkd[1478]: califedc784b2a9: Gained IPv6LL Dec 16 02:10:07.462000 audit[5396]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:07.462000 audit[5396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffdd592650 a2=0 a3=ffffad2b6fa8 items=0 ppid=4585 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.462000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:07.469000 audit[5397]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=5397 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:07.469000 audit[5397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc6fb11b0 a2=0 a3=ffffac1abfa8 items=0 ppid=4585 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.469000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:07.476000 audit[5395]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=5395 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:07.476000 audit[5395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffef7547b0 a2=0 a3=ffff7f8e2fa8 items=0 ppid=4585 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.476000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:07.491000 audit[5399]: NETFILTER_CFG table=filter:128 family=2 entries=297 op=nft_register_chain pid=5399 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:07.491000 audit[5399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=175548 a0=3 
a1=ffffdd03e8f0 a2=0 a3=ffffb3679fa8 items=0 ppid=4585 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:07.585000 audit[5411]: NETFILTER_CFG table=filter:129 family=2 entries=52 op=nft_register_chain pid=5411 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:10:07.585000 audit[5411]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23892 a0=3 a1=ffffff4af440 a2=0 a3=ffffb5eaafa8 items=0 ppid=4585 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.585000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:10:07.619970 kubelet[3500]: E1216 02:10:07.619826 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:10:07.623878 kubelet[3500]: E1216 02:10:07.623728 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:10:07.626862 kubelet[3500]: E1216 02:10:07.626707 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:10:07.842651 systemd-networkd[1478]: vxlan.calico: Gained IPv6LL Dec 16 02:10:07.871000 audit[5413]: NETFILTER_CFG table=filter:130 family=2 entries=17 op=nft_register_rule pid=5413 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:07.871000 audit[5413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcd7c3550 a2=0 a3=1 items=0 ppid=3608 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:07.890000 audit[5413]: NETFILTER_CFG table=nat:131 family=2 entries=47 op=nft_register_chain pid=5413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:07.890000 audit[5413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcd7c3550 a2=0 a3=1 items=0 ppid=3608 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:07.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:10.241641 ntpd[1843]: Listen normally on 6 vxlan.calico 192.168.71.192:123 Dec 16 02:10:10.242295 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 6 vxlan.calico 192.168.71.192:123 Dec 16 02:10:10.242295 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 7 cali5c40471f5b2 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 02:10:10.242295 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 8 cali510804c834f [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 02:10:10.241724 ntpd[1843]: Listen normally on 7 cali5c40471f5b2 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 02:10:10.241774 ntpd[1843]: Listen normally on 8 cali510804c834f [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 02:10:10.242553 ntpd[1843]: Listen normally on 9 cali63151274c7e [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 02:10:10.242661 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 9 cali63151274c7e [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 02:10:10.242661 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 10 cali751c9540b22 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 02:10:10.242609 ntpd[1843]: Listen normally on 10 cali751c9540b22 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 02:10:10.242793 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 11 calibefdd505385 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 02:10:10.242793 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 12 cali2fae0d2f820 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 02:10:10.242793 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 13 calia2fdd483a2b [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 02:10:10.242655 ntpd[1843]: Listen normally on 11 calibefdd505385 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 02:10:10.242988 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 14 califedc784b2a9 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 02:10:10.242988 ntpd[1843]: 16 Dec 02:10:10 ntpd[1843]: Listen normally on 15 vxlan.calico [fe80::6485:35ff:fefc:4ee0%12]:123 Dec 16 02:10:10.242700 ntpd[1843]: Listen normally on 12 cali2fae0d2f820 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 02:10:10.242747 ntpd[1843]: Listen normally on 13 calia2fdd483a2b [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 02:10:10.242792 ntpd[1843]: Listen normally on 14 califedc784b2a9 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 02:10:10.242838 ntpd[1843]: Listen normally on 15 vxlan.calico 
[fe80::6485:35ff:fefc:4ee0%12]:123 Dec 16 02:10:12.350808 systemd[1]: Started sshd@7-172.31.24.92:22-139.178.89.65:60572.service - OpenSSH per-connection server daemon (139.178.89.65:60572). Dec 16 02:10:12.358401 kernel: kauditd_printk_skb: 367 callbacks suppressed Dec 16 02:10:12.358542 kernel: audit: type=1130 audit(1765851012.350:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.92:22-139.178.89.65:60572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:12.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.92:22-139.178.89.65:60572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:12.642000 audit[5422]: USER_ACCT pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.649543 sshd[5422]: Accepted publickey for core from 139.178.89.65 port 60572 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:12.654958 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:12.651000 audit[5422]: CRED_ACQ pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.660900 kernel: audit: type=1101 audit(1765851012.642:744): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.661002 kernel: audit: type=1103 audit(1765851012.651:745): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.665981 kernel: audit: type=1006 audit(1765851012.651:746): pid=5422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 02:10:12.651000 audit[5422]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0e11e20 a2=3 a3=0 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:12.672227 kernel: audit: type=1300 audit(1765851012.651:746): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0e11e20 a2=3 a3=0 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:12.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:12.675049 kernel: audit: type=1327 audit(1765851012.651:746): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:12.682749 systemd-logind[1853]: New session 9 of 
user core. Dec 16 02:10:12.687881 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 02:10:12.695000 audit[5422]: USER_START pid=5422 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.703468 kernel: audit: type=1105 audit(1765851012.695:747): pid=5422 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.704000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:12.710475 kernel: audit: type=1103 audit(1765851012.704:748): pid=5434 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:13.052681 sshd[5434]: Connection closed by 139.178.89.65 port 60572 Dec 16 02:10:13.054252 sshd-session[5422]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:13.057000 audit[5422]: USER_END pid=5422 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:13.064681 systemd[1]: sshd@7-172.31.24.92:22-139.178.89.65:60572.service: Deactivated successfully. Dec 16 02:10:13.058000 audit[5422]: CRED_DISP pid=5422 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:13.068983 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 02:10:13.070437 kernel: audit: type=1106 audit(1765851013.057:749): pid=5422 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:13.070547 kernel: audit: type=1104 audit(1765851013.058:750): pid=5422 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:13.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.92:22-139.178.89.65:60572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:13.073837 systemd-logind[1853]: Session 9 logged out. 
Waiting for processes to exit. Dec 16 02:10:13.079088 systemd-logind[1853]: Removed session 9. Dec 16 02:10:16.994100 containerd[1908]: time="2025-12-16T02:10:16.993751909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:17.243895 containerd[1908]: time="2025-12-16T02:10:17.243750670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:17.248818 containerd[1908]: time="2025-12-16T02:10:17.248304538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:17.248818 containerd[1908]: time="2025-12-16T02:10:17.248365654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:17.250097 kubelet[3500]: E1216 02:10:17.249078 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:17.250097 kubelet[3500]: E1216 02:10:17.249169 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:17.250097 kubelet[3500]: E1216 02:10:17.249345 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:17.253631 containerd[1908]: time="2025-12-16T02:10:17.253480054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:17.513828 containerd[1908]: time="2025-12-16T02:10:17.513666215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:17.516753 containerd[1908]: time="2025-12-16T02:10:17.516657143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:17.516928 containerd[1908]: time="2025-12-16T02:10:17.516830963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:17.517268 kubelet[3500]: E1216 02:10:17.517196 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:17.517467 kubelet[3500]: E1216 02:10:17.517274 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:17.517541 kubelet[3500]: E1216 02:10:17.517455 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:17.517653 kubelet[3500]: E1216 02:10:17.517530 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:10:17.992820 containerd[1908]: time="2025-12-16T02:10:17.992690497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:18.019067 kubelet[3500]: I1216 02:10:18.017350 3500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8srj8" podStartSLOduration=69.017327386 podStartE2EDuration="1m9.017327386s" podCreationTimestamp="2025-12-16 02:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:10:07.785374731 +0000 UTC m=+63.133450671" watchObservedRunningTime="2025-12-16 02:10:18.017327386 +0000 UTC m=+73.365403338" Dec 16 02:10:18.095121 systemd[1]: Started sshd@8-172.31.24.92:22-139.178.89.65:60576.service - OpenSSH per-connection server daemon (139.178.89.65:60576). Dec 16 02:10:18.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.92:22-139.178.89.65:60576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:18.098524 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:18.098694 kernel: audit: type=1130 audit(1765851018.095:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.92:22-139.178.89.65:60576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:18.270063 containerd[1908]: time="2025-12-16T02:10:18.269924195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:18.273190 containerd[1908]: time="2025-12-16T02:10:18.273001415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:18.273190 containerd[1908]: time="2025-12-16T02:10:18.273124043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:18.273722 kubelet[3500]: E1216 02:10:18.273660 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:18.274673 kubelet[3500]: E1216 02:10:18.273729 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:18.274673 kubelet[3500]: E1216 02:10:18.273860 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-t8ws5_calico-apiserver(6363be22-676f-4db3-afb1-0a1ce8d8def2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:18.274673 kubelet[3500]: E1216 02:10:18.273921 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:10:18.298000 audit[5455]: USER_ACCT pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.300825 sshd[5455]: Accepted publickey for core from 139.178.89.65 port 60576 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:18.305467 kernel: audit: type=1101 audit(1765851018.298:753): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.305000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.308304 sshd-session[5455]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:18.315812 kernel: audit: type=1103 audit(1765851018.305:754): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.315929 kernel: audit: type=1006 audit(1765851018.305:755): pid=5455 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 02:10:18.305000 audit[5455]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe986f2b0 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.323948 kernel: audit: type=1300 audit(1765851018.305:755): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe986f2b0 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:18.324089 kernel: audit: type=1327 audit(1765851018.305:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:18.305000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:18.330072 systemd-logind[1853]: New session 10 of user core. Dec 16 02:10:18.339726 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 02:10:18.348000 audit[5455]: USER_START pid=5455 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.360000 audit[5459]: CRED_ACQ pid=5459 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.366887 kernel: audit: type=1105 audit(1765851018.348:756): pid=5455 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.367105 kernel: audit: type=1103 audit(1765851018.360:757): pid=5459 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.564755 sshd[5459]: Connection closed by 139.178.89.65 port 60576 Dec 16 02:10:18.566522 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:18.570000 audit[5455]: USER_END pid=5455 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.579446 systemd[1]: sshd@8-172.31.24.92:22-139.178.89.65:60576.service: Deactivated successfully. Dec 16 02:10:18.573000 audit[5455]: CRED_DISP pid=5455 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.587301 kernel: audit: type=1106 audit(1765851018.570:758): pid=5455 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.587803 kernel: audit: type=1104 audit(1765851018.573:759): pid=5455 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:18.589039 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 02:10:18.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.92:22-139.178.89.65:60576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:18.594009 systemd-logind[1853]: Session 10 logged out. Waiting for processes to exit. Dec 16 02:10:18.599318 systemd-logind[1853]: Removed session 10. Dec 16 02:10:19.003985 containerd[1908]: time="2025-12-16T02:10:19.003896878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:19.261751 containerd[1908]: time="2025-12-16T02:10:19.261555744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:19.264237 containerd[1908]: time="2025-12-16T02:10:19.264158892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:19.264385 containerd[1908]: time="2025-12-16T02:10:19.264300780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:19.264634 kubelet[3500]: E1216 02:10:19.264559 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:19.264765 kubelet[3500]: E1216 02:10:19.264646 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:19.264842 kubelet[3500]: E1216 02:10:19.264766 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5q889_calico-system(5f75e4b0-aa22-4937-a793-7da0a16c1ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:19.264842 kubelet[3500]: E1216 02:10:19.264821 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:20.993947 containerd[1908]: time="2025-12-16T02:10:20.993816904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:21.275315 containerd[1908]: time="2025-12-16T02:10:21.275123510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:21.278618 containerd[1908]: time="2025-12-16T02:10:21.278095190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:21.278618 containerd[1908]: time="2025-12-16T02:10:21.278231450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:21.281526 kubelet[3500]: E1216 02:10:21.279300 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:21.281526 kubelet[3500]: E1216 02:10:21.279367 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:21.281526 kubelet[3500]: E1216 02:10:21.279624 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:21.284270 containerd[1908]: time="2025-12-16T02:10:21.283277666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:21.537784 containerd[1908]: time="2025-12-16T02:10:21.537639519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:21.540445 containerd[1908]: time="2025-12-16T02:10:21.540205995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:21.540445 containerd[1908]: time="2025-12-16T02:10:21.540343203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:21.540901 kubelet[3500]: E1216 02:10:21.540835 3500 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:21.541105 kubelet[3500]: E1216 02:10:21.541062 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:21.541871 kubelet[3500]: E1216 02:10:21.541589 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54647b869b-dj58v_calico-system(1d7a12f8-f60f-4170-be36-168aef541297): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:21.542040 kubelet[3500]: E1216 02:10:21.541995 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:10:21.542985 containerd[1908]: time="2025-12-16T02:10:21.542119779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:10:21.817345 containerd[1908]: time="2025-12-16T02:10:21.817147444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:21.820007 containerd[1908]: time="2025-12-16T02:10:21.819896716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:10:21.820234 containerd[1908]: time="2025-12-16T02:10:21.820055740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:21.820390 kubelet[3500]: E1216 02:10:21.820310 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:21.820613 kubelet[3500]: E1216 02:10:21.820386 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:21.821492 kubelet[3500]: E1216 02:10:21.820676 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:21.821492 kubelet[3500]: E1216 02:10:21.820779 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:10:21.991332 containerd[1908]: time="2025-12-16T02:10:21.991254557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:22.224455 containerd[1908]: time="2025-12-16T02:10:22.224361542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:22.227248 containerd[1908]: time="2025-12-16T02:10:22.227029083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:22.227248 containerd[1908]: time="2025-12-16T02:10:22.227084811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:22.227595 kubelet[3500]: E1216 02:10:22.227472 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:22.227595 kubelet[3500]: E1216 02:10:22.227543 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:22.227764 kubelet[3500]: E1216 02:10:22.227656 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-pp87t_calico-apiserver(2d19a364-8480-43c0-bbf1-372d74633ca8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:22.227764 kubelet[3500]: E1216 02:10:22.227716 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:10:23.600564 systemd[1]: Started sshd@9-172.31.24.92:22-139.178.89.65:51116.service - OpenSSH per-connection server daemon (139.178.89.65:51116). Dec 16 02:10:23.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.92:22-139.178.89.65:51116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:23.603047 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:23.603157 kernel: audit: type=1130 audit(1765851023.599:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.92:22-139.178.89.65:51116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:23.816000 audit[5474]: USER_ACCT pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.818356 sshd[5474]: Accepted publickey for core from 139.178.89.65 port 51116 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:23.825446 kernel: audit: type=1101 audit(1765851023.816:762): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.824000 audit[5474]: CRED_ACQ pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.829785 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:23.835887 kernel: audit: type=1103 audit(1765851023.824:763): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.836015 kernel: audit: type=1006 audit(1765851023.824:764): pid=5474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 02:10:23.824000 audit[5474]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff809f3c0 a2=3 a3=0 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:23.842649 kernel: audit: type=1300 audit(1765851023.824:764): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff809f3c0 a2=3 a3=0 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:23.824000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:23.844231 kernel: audit: type=1327 audit(1765851023.824:764): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:23.851523 systemd-logind[1853]: New session 11 of user core. Dec 16 02:10:23.859841 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 02:10:23.865000 audit[5474]: USER_START pid=5474 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.875614 kernel: audit: type=1105 audit(1765851023.865:765): pid=5474 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.874000 audit[5478]: CRED_ACQ pid=5478 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:23.882562 kernel: audit: type=1103 audit(1765851023.874:766): pid=5478 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.103224 sshd[5478]: Connection closed by 139.178.89.65 port 51116 Dec 16 02:10:24.104796 sshd-session[5474]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:24.132000 audit[5474]: USER_END pid=5474 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.141745 systemd[1]: Started sshd@10-172.31.24.92:22-139.178.89.65:51120.service - OpenSSH per-connection server daemon (139.178.89.65:51120). Dec 16 02:10:24.145276 systemd[1]: sshd@9-172.31.24.92:22-139.178.89.65:51116.service: Deactivated successfully. Dec 16 02:10:24.132000 audit[5474]: CRED_DISP pid=5474 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.152794 kernel: audit: type=1106 audit(1765851024.132:767): pid=5474 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.153133 kernel: audit: type=1104 audit(1765851024.132:768): pid=5474 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.156277 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 16 02:10:24.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.24.92:22-139.178.89.65:51120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:24.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.92:22-139.178.89.65:51116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:24.164204 systemd-logind[1853]: Session 11 logged out. Waiting for processes to exit. Dec 16 02:10:24.167182 systemd-logind[1853]: Removed session 11. Dec 16 02:10:24.345000 audit[5487]: USER_ACCT pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.347666 sshd[5487]: Accepted publickey for core from 139.178.89.65 port 51120 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:24.348000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.348000 audit[5487]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc45df1d0 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:24.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:24.351573 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:24.362539 systemd-logind[1853]: New session 12 of user core. Dec 16 02:10:24.367751 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 02:10:24.373000 audit[5487]: USER_START pid=5487 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.379000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.648900 sshd[5494]: Connection closed by 139.178.89.65 port 51120 Dec 16 02:10:24.648488 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:24.656000 audit[5487]: USER_END pid=5487 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.657000 audit[5487]: CRED_DISP pid=5487 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.664943 systemd[1]: sshd@10-172.31.24.92:22-139.178.89.65:51120.service: Deactivated successfully. Dec 16 02:10:24.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.24.92:22-139.178.89.65:51120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:24.673766 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 02:10:24.678093 systemd-logind[1853]: Session 12 logged out. Waiting for processes to exit. Dec 16 02:10:24.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.24.92:22-139.178.89.65:51128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:24.709288 systemd[1]: Started sshd@11-172.31.24.92:22-139.178.89.65:51128.service - OpenSSH per-connection server daemon (139.178.89.65:51128). Dec 16 02:10:24.714615 systemd-logind[1853]: Removed session 12. 
Dec 16 02:10:24.918000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.920693 sshd[5504]: Accepted publickey for core from 139.178.89.65 port 51128 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:24.921000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.921000 audit[5504]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe41fe300 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:24.921000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:24.924813 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:24.934367 systemd-logind[1853]: New session 13 of user core. Dec 16 02:10:24.947772 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 02:10:24.953000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:24.957000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:25.160493 sshd[5508]: Connection closed by 139.178.89.65 port 51128 Dec 16 02:10:25.161697 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:25.163000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:25.163000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:25.170208 systemd[1]: sshd@11-172.31.24.92:22-139.178.89.65:51128.service: Deactivated successfully. Dec 16 02:10:25.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.24.92:22-139.178.89.65:51128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:25.175559 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 02:10:25.179617 systemd-logind[1853]: Session 13 logged out. Waiting for processes to exit. 
Dec 16 02:10:25.183304 systemd-logind[1853]: Removed session 13. Dec 16 02:10:30.213856 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 02:10:30.214052 kernel: audit: type=1130 audit(1765851030.205:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.92:22-139.178.89.65:48220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:30.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.92:22-139.178.89.65:48220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:30.206281 systemd[1]: Started sshd@12-172.31.24.92:22-139.178.89.65:48220.service - OpenSSH per-connection server daemon (139.178.89.65:48220). Dec 16 02:10:30.459000 audit[5551]: USER_ACCT pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.467273 sshd[5551]: Accepted publickey for core from 139.178.89.65 port 48220 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:30.474870 kernel: audit: type=1101 audit(1765851030.459:789): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.474947 kernel: audit: type=1103 audit(1765851030.466:790): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.466000 audit[5551]: CRED_ACQ pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.472007 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:30.481912 kernel: audit: type=1006 audit(1765851030.466:791): pid=5551 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 02:10:30.488565 kernel: audit: type=1300 audit(1765851030.466:791): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe108aa80 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.466000 audit[5551]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe108aa80 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:30.466000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:30.504129 kernel: audit: type=1327 audit(1765851030.466:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:30.501199 
systemd-logind[1853]: New session 14 of user core. Dec 16 02:10:30.509659 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 02:10:30.518000 audit[5551]: USER_START pid=5551 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.530000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.538751 kernel: audit: type=1105 audit(1765851030.518:792): pid=5551 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.538898 kernel: audit: type=1103 audit(1765851030.530:793): pid=5555 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.805772 sshd[5555]: Connection closed by 139.178.89.65 port 48220 Dec 16 02:10:30.808766 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:30.812000 audit[5551]: USER_END pid=5551 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.813000 audit[5551]: CRED_DISP pid=5551 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.829475 kernel: audit: type=1106 audit(1765851030.812:794): pid=5551 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.829628 kernel: audit: type=1104 audit(1765851030.813:795): pid=5551 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:30.831088 systemd[1]: sshd@12-172.31.24.92:22-139.178.89.65:48220.service: Deactivated successfully. Dec 16 02:10:30.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.92:22-139.178.89.65:48220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:30.837390 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 16 02:10:30.841369 systemd-logind[1853]: Session 14 logged out. Waiting for processes to exit. Dec 16 02:10:30.848808 systemd-logind[1853]: Removed session 14. Dec 16 02:10:30.997888 kubelet[3500]: E1216 02:10:30.997577 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:10:31.992514 kubelet[3500]: E1216 02:10:31.992393 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:31.993343 kubelet[3500]: E1216 02:10:31.992923 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:10:32.994859 kubelet[3500]: E1216 02:10:32.994773 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:10:35.863496 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:35.863663 kernel: audit: type=1130 audit(1765851035.855:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.92:22-139.178.89.65:48230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:35.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.92:22-139.178.89.65:48230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:35.856980 systemd[1]: Started sshd@13-172.31.24.92:22-139.178.89.65:48230.service - OpenSSH per-connection server daemon (139.178.89.65:48230). Dec 16 02:10:36.001489 kubelet[3500]: E1216 02:10:36.001340 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:10:36.056000 audit[5574]: USER_ACCT pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.064713 sshd[5574]: Accepted publickey for core from 139.178.89.65 port 48230 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:36.069404 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:36.065000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.078435 kernel: audit: type=1101 audit(1765851036.056:798): pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.078589 kernel: audit: type=1103 audit(1765851036.065:799): pid=5574 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.091971 kernel: audit: type=1006 audit(1765851036.065:800): pid=5574 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 02:10:36.101585 kernel: audit: type=1300 audit(1765851036.065:800): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcaa096c0 a2=3 a3=0 items=0 ppid=1 pid=5574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:36.065000 audit[5574]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcaa096c0 a2=3 a3=0 items=0 ppid=1 pid=5574 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 02:10:36.103639 systemd-logind[1853]: New session 15 of user core. Dec 16 02:10:36.065000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:36.106556 kernel: audit: type=1327 audit(1765851036.065:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:36.111061 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 02:10:36.120000 audit[5574]: USER_START pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.130000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.135610 kernel: audit: type=1105 audit(1765851036.120:801): pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.142452 kernel: audit: type=1103 audit(1765851036.130:802): pid=5578 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.331344 sshd[5578]: Connection closed by 139.178.89.65 port 48230 Dec 16 02:10:36.331762 sshd-session[5574]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:36.334000 audit[5574]: USER_END pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.348841 kernel: audit: type=1106 audit(1765851036.334:803): pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.342000 audit[5574]: CRED_DISP pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.351558 systemd[1]: sshd@13-172.31.24.92:22-139.178.89.65:48230.service: Deactivated successfully. Dec 16 02:10:36.357587 systemd-logind[1853]: Session 15 logged out. Waiting for processes to exit. Dec 16 02:10:36.358166 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 02:10:36.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.92:22-139.178.89.65:48230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:36.359483 kernel: audit: type=1104 audit(1765851036.342:804): pid=5574 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:36.364084 systemd-logind[1853]: Removed session 15. Dec 16 02:10:36.995466 kubelet[3500]: E1216 02:10:36.994226 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:10:41.377662 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:41.377870 kernel: audit: type=1130 audit(1765851041.372:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.92:22-139.178.89.65:39054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:41.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.92:22-139.178.89.65:39054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:41.374018 systemd[1]: Started sshd@14-172.31.24.92:22-139.178.89.65:39054.service - OpenSSH per-connection server daemon (139.178.89.65:39054). Dec 16 02:10:41.608000 audit[5593]: USER_ACCT pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.617271 sshd[5593]: Accepted publickey for core from 139.178.89.65 port 39054 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:41.619615 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:41.616000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.626909 kernel: audit: type=1101 audit(1765851041.608:807): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.627051 kernel: audit: type=1103 audit(1765851041.616:808): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.631247 kernel: audit: type=1006 audit(1765851041.616:809): pid=5593 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 02:10:41.616000 audit[5593]: SYSCALL 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7055ed0 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:41.638015 kernel: audit: type=1300 audit(1765851041.616:809): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7055ed0 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:41.638740 kernel: audit: type=1327 audit(1765851041.616:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:41.616000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:41.655325 systemd-logind[1853]: New session 16 of user core. Dec 16 02:10:41.665080 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 02:10:41.673000 audit[5593]: USER_START pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.683488 kernel: audit: type=1105 audit(1765851041.673:810): pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.684000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.692463 kernel: audit: type=1103 audit(1765851041.684:811): pid=5597 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.940728 sshd[5597]: Connection closed by 139.178.89.65 port 39054 Dec 16 02:10:41.942364 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:41.946000 audit[5593]: USER_END pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.954000 audit[5593]: CRED_DISP pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.966644 kernel: audit: type=1106 audit(1765851041.946:812): pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.966924 kernel: audit: type=1104 audit(1765851041.954:813): pid=5593 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:41.963258 systemd[1]: sshd@14-172.31.24.92:22-139.178.89.65:39054.service: Deactivated successfully. Dec 16 02:10:41.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.92:22-139.178.89.65:39054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:41.973198 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 02:10:41.981879 systemd-logind[1853]: Session 16 logged out. Waiting for processes to exit. Dec 16 02:10:41.987374 systemd-logind[1853]: Removed session 16. Dec 16 02:10:42.994938 containerd[1908]: time="2025-12-16T02:10:42.994205510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:10:43.252821 containerd[1908]: time="2025-12-16T02:10:43.252179027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:43.254989 containerd[1908]: time="2025-12-16T02:10:43.254804291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:10:43.254989 containerd[1908]: time="2025-12-16T02:10:43.254893943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:43.255427 kubelet[3500]: E1216 02:10:43.255326 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:43.257098 kubelet[3500]: E1216 02:10:43.255442 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:10:43.257098 kubelet[3500]: E1216 02:10:43.255609 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5q889_calico-system(5f75e4b0-aa22-4937-a793-7da0a16c1ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:43.257098 kubelet[3500]: E1216 02:10:43.255670 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:45.993645 
containerd[1908]: time="2025-12-16T02:10:45.993234965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:46.304857 containerd[1908]: time="2025-12-16T02:10:46.304676642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:46.307123 containerd[1908]: time="2025-12-16T02:10:46.307017878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:46.307661 containerd[1908]: time="2025-12-16T02:10:46.307087058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:46.307735 kubelet[3500]: E1216 02:10:46.307371 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:46.307735 kubelet[3500]: E1216 02:10:46.307485 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:46.310604 kubelet[3500]: E1216 02:10:46.308687 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-t8ws5_calico-apiserver(6363be22-676f-4db3-afb1-0a1ce8d8def2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:46.310604 kubelet[3500]: E1216 02:10:46.308774 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:10:46.310867 containerd[1908]: time="2025-12-16T02:10:46.309692366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:10:46.571363 containerd[1908]: time="2025-12-16T02:10:46.571190427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:46.574445 containerd[1908]: time="2025-12-16T02:10:46.574319823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:10:46.574607 containerd[1908]: time="2025-12-16T02:10:46.574404711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:46.575359 kubelet[3500]: E1216 02:10:46.575221 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:46.575551 kubelet[3500]: E1216 02:10:46.575376 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:10:46.575640 kubelet[3500]: E1216 02:10:46.575574 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:46.579400 containerd[1908]: time="2025-12-16T02:10:46.578592015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:10:46.841021 containerd[1908]: time="2025-12-16T02:10:46.839081549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:46.843691 containerd[1908]: time="2025-12-16T02:10:46.843511217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:10:46.843691 containerd[1908]: time="2025-12-16T02:10:46.843592733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:46.843969 kubelet[3500]: E1216 02:10:46.843844 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:46.843969 kubelet[3500]: E1216 02:10:46.843910 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:10:46.844113 kubelet[3500]: E1216 02:10:46.844020 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:46.844183 kubelet[3500]: E1216 02:10:46.844094 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:10:46.985616 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:10:46.986589 kernel: audit: type=1130 audit(1765851046.980:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.92:22-139.178.89.65:39060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:46.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.92:22-139.178.89.65:39060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:46.982071 systemd[1]: Started sshd@15-172.31.24.92:22-139.178.89.65:39060.service - OpenSSH per-connection server daemon (139.178.89.65:39060). Dec 16 02:10:47.002222 containerd[1908]: time="2025-12-16T02:10:47.002048294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:10:47.219000 audit[5611]: USER_ACCT pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.226802 sshd[5611]: Accepted publickey for core from 139.178.89.65 port 39060 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:47.227476 kernel: audit: type=1101 audit(1765851047.219:816): pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.226000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.230739 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:47.238237 kernel: audit: type=1103 audit(1765851047.226:817): pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.238386 kernel: audit: type=1006 audit(1765851047.226:818): pid=5611 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 02:10:47.226000 audit[5611]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeef03710 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.245017 kernel: audit: type=1300 audit(1765851047.226:818): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeef03710 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.226000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:47.247959 kernel: audit: type=1327 audit(1765851047.226:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:47.259340 systemd-logind[1853]: New session 17 of user core. Dec 16 02:10:47.265857 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 02:10:47.273000 audit[5611]: USER_START pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.282560 kernel: audit: type=1105 audit(1765851047.273:819): pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.283000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.290462 kernel: audit: type=1103 audit(1765851047.283:820): pid=5615 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.321065 containerd[1908]: time="2025-12-16T02:10:47.320965119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:47.323447 containerd[1908]: time="2025-12-16T02:10:47.323293587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:10:47.323739 containerd[1908]: time="2025-12-16T02:10:47.323377467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:47.323877 kubelet[3500]: E1216 02:10:47.323679 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:47.323877 kubelet[3500]: E1216 02:10:47.323743 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:10:47.326480 kubelet[3500]: E1216 02:10:47.324156 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-54647b869b-dj58v_calico-system(1d7a12f8-f60f-4170-be36-168aef541297): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:47.326480 kubelet[3500]: E1216 02:10:47.324220 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:10:47.489632 sshd[5615]: Connection closed by 139.178.89.65 port 39060 Dec 16 02:10:47.491390 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:47.495000 audit[5611]: USER_END pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.495000 audit[5611]: CRED_DISP pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.514333 kernel: audit: type=1106 audit(1765851047.495:821): pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.514498 kernel: audit: type=1104 audit(1765851047.495:822): pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.509389 systemd[1]: sshd@15-172.31.24.92:22-139.178.89.65:39060.service: Deactivated successfully. Dec 16 02:10:47.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.92:22-139.178.89.65:39060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:47.518047 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 02:10:47.527935 systemd-logind[1853]: Session 17 logged out. Waiting for processes to exit. Dec 16 02:10:47.551759 systemd[1]: Started sshd@16-172.31.24.92:22-139.178.89.65:39076.service - OpenSSH per-connection server daemon (139.178.89.65:39076). Dec 16 02:10:47.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.24.92:22-139.178.89.65:39076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:47.556592 systemd-logind[1853]: Removed session 17. 
Dec 16 02:10:47.747000 audit[5634]: USER_ACCT pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.750269 sshd[5634]: Accepted publickey for core from 139.178.89.65 port 39076 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:47.750000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.750000 audit[5634]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe908e170 a2=3 a3=0 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:47.750000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:47.754800 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:47.766998 systemd-logind[1853]: New session 18 of user core. Dec 16 02:10:47.772872 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 02:10:47.779000 audit[5634]: USER_START pid=5634 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:47.785000 audit[5640]: CRED_ACQ pid=5640 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.235389 sshd[5640]: Connection closed by 139.178.89.65 port 39076 Dec 16 02:10:48.236394 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:48.239000 audit[5634]: USER_END pid=5634 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.239000 audit[5634]: CRED_DISP pid=5634 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.246187 systemd[1]: sshd@16-172.31.24.92:22-139.178.89.65:39076.service: Deactivated successfully. Dec 16 02:10:48.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.24.92:22-139.178.89.65:39076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:48.251278 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 02:10:48.254681 systemd-logind[1853]: Session 18 logged out. Waiting for processes to exit. 
Dec 16 02:10:48.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.92:22-139.178.89.65:39084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:48.285614 systemd[1]: Started sshd@17-172.31.24.92:22-139.178.89.65:39084.service - OpenSSH per-connection server daemon (139.178.89.65:39084). Dec 16 02:10:48.291217 systemd-logind[1853]: Removed session 18. Dec 16 02:10:48.508000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.510191 sshd[5650]: Accepted publickey for core from 139.178.89.65 port 39084 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:48.510000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.511000 audit[5650]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda8c4ef0 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:48.511000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:48.514168 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:48.525143 systemd-logind[1853]: New session 19 of user core. Dec 16 02:10:48.530932 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 02:10:48.540000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.545000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:48.999535 containerd[1908]: time="2025-12-16T02:10:48.998993791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:10:49.285239 containerd[1908]: time="2025-12-16T02:10:49.285045593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:49.287746 containerd[1908]: time="2025-12-16T02:10:49.287652449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:10:49.287897 containerd[1908]: time="2025-12-16T02:10:49.287702777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:49.288274 kubelet[3500]: E1216 02:10:49.288154 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:49.289988 kubelet[3500]: E1216 02:10:49.288319 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:10:49.289988 kubelet[3500]: E1216 02:10:49.288516 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:49.293110 containerd[1908]: time="2025-12-16T02:10:49.292810205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:10:49.581907 containerd[1908]: time="2025-12-16T02:10:49.581717922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:49.584644 containerd[1908]: time="2025-12-16T02:10:49.584537706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:10:49.584810 containerd[1908]: time="2025-12-16T02:10:49.584553678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:49.584986 kubelet[3500]: E1216 
02:10:49.584912 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:49.585072 kubelet[3500]: E1216 02:10:49.584995 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:10:49.585144 kubelet[3500]: E1216 02:10:49.585100 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:49.585262 kubelet[3500]: E1216 02:10:49.585164 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:10:49.957000 audit[5680]: NETFILTER_CFG table=filter:132 family=2 entries=26 op=nft_register_rule pid=5680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:49.957000 audit[5680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcdce9390 a2=0 a3=1 items=0 ppid=3608 pid=5680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:49.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:49.973362 sshd[5654]: Connection closed by 139.178.89.65 port 39084 Dec 16 02:10:49.973206 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:49.979000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:49.979000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:49.984000 audit[5680]: 
NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:49.984000 audit[5680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcdce9390 a2=0 a3=1 items=0 ppid=3608 pid=5680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:49.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:49.993614 systemd[1]: sshd@17-172.31.24.92:22-139.178.89.65:39084.service: Deactivated successfully. Dec 16 02:10:49.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.92:22-139.178.89.65:39084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:49.995229 containerd[1908]: time="2025-12-16T02:10:49.993623228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:10:50.003334 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 02:10:50.007633 systemd-logind[1853]: Session 19 logged out. Waiting for processes to exit. Dec 16 02:10:50.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.92:22-139.178.89.65:39098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:50.037082 systemd[1]: Started sshd@18-172.31.24.92:22-139.178.89.65:39098.service - OpenSSH per-connection server daemon (139.178.89.65:39098). Dec 16 02:10:50.048104 systemd-logind[1853]: Removed session 19. Dec 16 02:10:50.261000 audit[5685]: USER_ACCT pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:50.263985 sshd[5685]: Accepted publickey for core from 139.178.89.65 port 39098 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:50.264000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:50.265000 audit[5685]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd08b4030 a2=3 a3=0 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:50.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:50.268458 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:50.279585 systemd-logind[1853]: New session 20 of user core. Dec 16 02:10:50.286802 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 02:10:50.292000 audit[5685]: USER_START pid=5685 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:50.300362 containerd[1908]: time="2025-12-16T02:10:50.300119826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:10:50.297000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:50.303842 containerd[1908]: time="2025-12-16T02:10:50.303622782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:10:50.303842 containerd[1908]: time="2025-12-16T02:10:50.303757266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:10:50.304207 kubelet[3500]: E1216 02:10:50.304099 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:50.304777 kubelet[3500]: E1216 02:10:50.304216 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:10:50.304777 kubelet[3500]: E1216 02:10:50.304531 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-pp87t_calico-apiserver(2d19a364-8480-43c0-bbf1-372d74633ca8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:10:50.304777 kubelet[3500]: E1216 02:10:50.304604 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:10:50.812469 sshd[5689]: Connection closed by 139.178.89.65 port 39098 Dec 16 02:10:50.811336 sshd-session[5685]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:50.813000 audit[5685]: USER_END pid=5685 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:50.813000 audit[5685]: CRED_DISP pid=5685 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:50.824119 systemd[1]: sshd@18-172.31.24.92:22-139.178.89.65:39098.service: Deactivated successfully. Dec 16 02:10:50.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.92:22-139.178.89.65:39098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:50.832360 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 02:10:50.839090 systemd-logind[1853]: Session 20 logged out. Waiting for processes to exit. Dec 16 02:10:50.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.24.92:22-139.178.89.65:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:50.861804 systemd[1]: Started sshd@19-172.31.24.92:22-139.178.89.65:35772.service - OpenSSH per-connection server daemon (139.178.89.65:35772). Dec 16 02:10:50.863926 systemd-logind[1853]: Removed session 20. Dec 16 02:10:51.029000 audit[5703]: NETFILTER_CFG table=filter:134 family=2 entries=38 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:51.029000 audit[5703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff14bffc0 a2=0 a3=1 items=0 ppid=3608 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:51.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:51.037000 audit[5703]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:51.037000 audit[5703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff14bffc0 a2=0 a3=1 items=0 ppid=3608 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:51.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:51.063000 audit[5699]: USER_ACCT pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:51.065589 sshd[5699]: Accepted publickey for core from 139.178.89.65 port 35772 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:51.066000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:51.066000 audit[5699]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=fffff62af900 a2=3 a3=0 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:51.066000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:51.070105 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:51.084552 systemd-logind[1853]: New session 21 of user core. Dec 16 02:10:51.090826 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 02:10:51.097000 audit[5699]: USER_START pid=5699 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:51.101000 audit[5705]: CRED_ACQ pid=5705 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:51.296489 sshd[5705]: Connection closed by 139.178.89.65 port 35772 Dec 16 02:10:51.297593 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:51.300000 audit[5699]: USER_END pid=5699 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:51.300000 audit[5699]: CRED_DISP pid=5699 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:51.307320 systemd[1]: sshd@19-172.31.24.92:22-139.178.89.65:35772.service: Deactivated successfully. Dec 16 02:10:51.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.24.92:22-139.178.89.65:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:51.313042 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 02:10:51.318503 systemd-logind[1853]: Session 21 logged out. Waiting for processes to exit. Dec 16 02:10:51.323237 systemd-logind[1853]: Removed session 21. Dec 16 02:10:56.338637 systemd[1]: Started sshd@20-172.31.24.92:22-139.178.89.65:35776.service - OpenSSH per-connection server daemon (139.178.89.65:35776). Dec 16 02:10:56.346689 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 02:10:56.346849 kernel: audit: type=1130 audit(1765851056.338:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.92:22-139.178.89.65:35776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:56.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.92:22-139.178.89.65:35776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:10:56.560000 audit[5719]: USER_ACCT pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.568115 sshd[5719]: Accepted publickey for core from 139.178.89.65 port 35776 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:10:56.566000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.574721 kernel: audit: type=1101 audit(1765851056.560:865): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.574886 kernel: audit: type=1103 audit(1765851056.566:866): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.570330 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:10:56.578858 kernel: audit: type=1006 audit(1765851056.567:867): pid=5719 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 02:10:56.567000 audit[5719]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe54f98b0 a2=3 a3=0 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:56.585439 kernel: audit: type=1300 audit(1765851056.567:867): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe54f98b0 a2=3 a3=0 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:56.588796 kernel: audit: type=1327 audit(1765851056.567:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:56.567000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:10:56.602964 systemd-logind[1853]: New session 22 of user core. Dec 16 02:10:56.612837 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 02:10:56.620000 audit[5719]: USER_START pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.631567 kernel: audit: type=1105 audit(1765851056.620:868): pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.632000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.640484 kernel: audit: type=1103 audit(1765851056.632:869): pid=5723 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.818819 sshd[5723]: Connection closed by 139.178.89.65 port 35776 Dec 16 02:10:56.821652 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Dec 16 02:10:56.826000 audit[5719]: USER_END pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.826000 audit[5719]: CRED_DISP pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.841514 kernel: audit: type=1106 audit(1765851056.826:870): pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.841666 kernel: audit: type=1104 audit(1765851056.826:871): pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:10:56.843288 systemd[1]: sshd@20-172.31.24.92:22-139.178.89.65:35776.service: Deactivated successfully. Dec 16 02:10:56.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.92:22-139.178.89.65:35776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:10:56.852169 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 02:10:56.859128 systemd-logind[1853]: Session 22 logged out. Waiting for processes to exit. Dec 16 02:10:56.867126 systemd-logind[1853]: Removed session 22. 
Dec 16 02:10:56.998352 kubelet[3500]: E1216 02:10:56.998236 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:10:57.452000 audit[5734]: NETFILTER_CFG table=filter:136 family=2 entries=26 op=nft_register_rule pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:57.452000 audit[5734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe0adc6a0 a2=0 a3=1 items=0 ppid=3608 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:57.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:57.468000 audit[5734]: NETFILTER_CFG table=nat:137 family=2 entries=104 op=nft_register_chain pid=5734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:10:57.468000 audit[5734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe0adc6a0 a2=0 a3=1 items=0 ppid=3608 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:10:57.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:10:57.994330 kubelet[3500]: E1216 02:10:57.994205 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:10:57.996865 kubelet[3500]: E1216 02:10:57.996658 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:11:00.998647 kubelet[3500]: E1216 02:11:00.997866 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:11:01.003108 kubelet[3500]: E1216 02:11:01.002872 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:11:01.865800 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 02:11:01.865961 kernel: audit: type=1130 audit(1765851061.859:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.92:22-139.178.89.65:49218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:01.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.92:22-139.178.89.65:49218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:01.861056 systemd[1]: Started sshd@21-172.31.24.92:22-139.178.89.65:49218.service - OpenSSH per-connection server daemon (139.178.89.65:49218). 
Dec 16 02:11:01.996663 kubelet[3500]: E1216 02:11:01.996521 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:11:02.072000 audit[5761]: USER_ACCT pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.079625 sshd[5761]: Accepted publickey for core from 139.178.89.65 port 49218 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:02.080000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.090015 kernel: audit: type=1101 audit(1765851062.072:876): pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.090125 kernel: audit: type=1103 audit(1765851062.080:877): pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.090852 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:02.095366 kernel: audit: type=1006 audit(1765851062.081:878): pid=5761 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 02:11:02.081000 audit[5761]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2d6c0e0 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:02.101825 kernel: audit: type=1300 audit(1765851062.081:878): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2d6c0e0 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:02.081000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:02.104773 kernel: audit: type=1327 audit(1765851062.081:878): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:02.113914 systemd-logind[1853]: New session 23 of user core. Dec 16 02:11:02.121138 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 02:11:02.128000 audit[5761]: USER_START pid=5761 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.138698 kernel: audit: type=1105 audit(1765851062.128:879): pid=5761 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.141000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.148486 kernel: audit: type=1103 audit(1765851062.141:880): pid=5765 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.353491 sshd[5765]: Connection closed by 139.178.89.65 port 49218 Dec 16 02:11:02.353700 sshd-session[5761]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:02.357000 audit[5761]: USER_END pid=5761 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.369116 systemd[1]: sshd@21-172.31.24.92:22-139.178.89.65:49218.service: Deactivated successfully. Dec 16 02:11:02.357000 audit[5761]: CRED_DISP pid=5761 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.379977 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 02:11:02.380629 kernel: audit: type=1106 audit(1765851062.357:881): pid=5761 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.381088 kernel: audit: type=1104 audit(1765851062.357:882): pid=5761 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:02.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.92:22-139.178.89.65:49218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:02.392875 systemd-logind[1853]: Session 23 logged out. Waiting for processes to exit. Dec 16 02:11:02.398972 systemd-logind[1853]: Removed session 23. 
Dec 16 02:11:07.398189 systemd[1]: Started sshd@22-172.31.24.92:22-139.178.89.65:49226.service - OpenSSH per-connection server daemon (139.178.89.65:49226). Dec 16 02:11:07.407385 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:07.407486 kernel: audit: type=1130 audit(1765851067.398:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.92:22-139.178.89.65:49226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:07.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.92:22-139.178.89.65:49226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:07.631000 audit[5780]: USER_ACCT pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.633270 sshd[5780]: Accepted publickey for core from 139.178.89.65 port 49226 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:07.643000 audit[5780]: CRED_ACQ pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.651079 kernel: audit: type=1101 audit(1765851067.631:885): pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.651211 kernel: audit: type=1103 audit(1765851067.643:886): pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.649769 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:07.658977 kernel: audit: type=1006 audit(1765851067.643:887): pid=5780 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 02:11:07.643000 audit[5780]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca5a8690 a2=3 a3=0 items=0 ppid=1 pid=5780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:07.672238 kernel: audit: type=1300 audit(1765851067.643:887): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca5a8690 a2=3 a3=0 items=0 ppid=1 pid=5780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:07.678320 kernel: audit: type=1327 audit(1765851067.643:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:07.643000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:07.685924 systemd-logind[1853]: New session 24 of user core. 
Dec 16 02:11:07.692805 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 02:11:07.713000 audit[5780]: USER_START pid=5780 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.724000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.733901 kernel: audit: type=1105 audit(1765851067.713:888): pid=5780 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:07.734026 kernel: audit: type=1103 audit(1765851067.724:889): pid=5784 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:08.017455 sshd[5784]: Connection closed by 139.178.89.65 port 49226 Dec 16 02:11:08.016748 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:08.020000 audit[5780]: USER_END pid=5780 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:08.020000 audit[5780]: CRED_DISP pid=5780 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:08.036378 kernel: audit: type=1106 audit(1765851068.020:890): pid=5780 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:08.036579 kernel: audit: type=1104 audit(1765851068.020:891): pid=5780 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:08.031174 systemd[1]: sshd@22-172.31.24.92:22-139.178.89.65:49226.service: Deactivated successfully. Dec 16 02:11:08.036680 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 02:11:08.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.92:22-139.178.89.65:49226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:08.042012 systemd-logind[1853]: Session 24 logged out. 
Waiting for processes to exit. Dec 16 02:11:08.048245 systemd-logind[1853]: Removed session 24. Dec 16 02:11:08.993086 kubelet[3500]: E1216 02:11:08.992720 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:11:08.997581 kubelet[3500]: E1216 02:11:08.997493 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:11:10.995941 kubelet[3500]: E1216 02:11:10.994913 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:11:12.995305 kubelet[3500]: E1216 02:11:12.995173 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:11:13.055252 systemd[1]: Started sshd@23-172.31.24.92:22-139.178.89.65:37138.service - OpenSSH per-connection server daemon (139.178.89.65:37138). Dec 16 02:11:13.065010 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:13.066167 kernel: audit: type=1130 audit(1765851073.054:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.92:22-139.178.89.65:37138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:13.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.92:22-139.178.89.65:37138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:11:13.272000 audit[5799]: USER_ACCT pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.275313 sshd[5799]: Accepted publickey for core from 139.178.89.65 port 37138 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:13.280000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.284349 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:13.289980 kernel: audit: type=1101 audit(1765851073.272:894): pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.290120 kernel: audit: type=1103 audit(1765851073.280:895): pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.297815 kernel: audit: type=1006 audit(1765851073.280:896): pid=5799 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 02:11:13.280000 audit[5799]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6281420 a2=3 a3=0 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:13.305742 kernel: audit: type=1300 audit(1765851073.280:896): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6281420 a2=3 a3=0 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:13.308457 kernel: audit: type=1327 audit(1765851073.280:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:13.280000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:13.314661 systemd-logind[1853]: New session 25 of user core. Dec 16 02:11:13.321832 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 02:11:13.331000 audit[5799]: USER_START pid=5799 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.339000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.346437 kernel: audit: type=1105 audit(1765851073.331:897): pid=5799 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.346598 kernel: audit: type=1103 audit(1765851073.339:898): pid=5803 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.557113 sshd[5803]: Connection closed by 139.178.89.65 port 37138 Dec 16 02:11:13.558261 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:13.560000 audit[5799]: USER_END pid=5799 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.571099 systemd[1]: sshd@23-172.31.24.92:22-139.178.89.65:37138.service: Deactivated successfully. Dec 16 02:11:13.577057 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 02:11:13.584941 kernel: audit: type=1106 audit(1765851073.560:899): pid=5799 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.585098 kernel: audit: type=1104 audit(1765851073.560:900): pid=5799 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.560000 audit[5799]: CRED_DISP pid=5799 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:13.586946 systemd-logind[1853]: Session 25 logged out. Waiting for processes to exit. Dec 16 02:11:13.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.92:22-139.178.89.65:37138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:13.593931 systemd-logind[1853]: Removed session 25. 
Dec 16 02:11:15.997117 kubelet[3500]: E1216 02:11:15.997025 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:11:15.998593 kubelet[3500]: E1216 02:11:15.998398 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:11:18.608358 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:18.608530 kernel: audit: type=1130 audit(1765851078.598:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.92:22-139.178.89.65:37140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:18.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.92:22-139.178.89.65:37140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:18.600095 systemd[1]: Started sshd@24-172.31.24.92:22-139.178.89.65:37140.service - OpenSSH per-connection server daemon (139.178.89.65:37140). 
Dec 16 02:11:18.843000 audit[5815]: USER_ACCT pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.853530 sshd[5815]: Accepted publickey for core from 139.178.89.65 port 37140 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:18.857300 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:18.851000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.863451 kernel: audit: type=1101 audit(1765851078.843:903): pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.864234 kernel: audit: type=1103 audit(1765851078.851:904): pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.873737 kernel: audit: type=1006 audit(1765851078.851:905): pid=5815 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 02:11:18.875607 kernel: audit: type=1300 audit(1765851078.851:905): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff92b31c0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:18.851000 audit[5815]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff92b31c0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:18.851000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:18.883875 kernel: audit: type=1327 audit(1765851078.851:905): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:18.895852 systemd-logind[1853]: New session 26 of user core. Dec 16 02:11:18.904974 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 02:11:18.918000 audit[5815]: USER_START pid=5815 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.929525 kernel: audit: type=1105 audit(1765851078.918:906): pid=5815 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.932000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:18.940467 kernel: audit: type=1103 audit(1765851078.932:907): pid=5819 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:19.191519 sshd[5819]: Connection closed by 139.178.89.65 port 37140 Dec 16 02:11:19.193730 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:19.196000 audit[5815]: USER_END pid=5815 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:19.196000 audit[5815]: CRED_DISP pid=5815 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:19.213678 kernel: audit: type=1106 audit(1765851079.196:908): pid=5815 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:19.215101 systemd[1]: sshd@24-172.31.24.92:22-139.178.89.65:37140.service: Deactivated successfully. Dec 16 02:11:19.220952 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 02:11:19.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.92:22-139.178.89.65:37140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:19.224445 kernel: audit: type=1104 audit(1765851079.196:909): pid=5815 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:19.226385 systemd-logind[1853]: Session 26 logged out. Waiting for processes to exit. Dec 16 02:11:19.233058 systemd-logind[1853]: Removed session 26. 
Dec 16 02:11:19.990810 kubelet[3500]: E1216 02:11:19.990724 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:11:22.998937 kubelet[3500]: E1216 02:11:22.998852 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:11:24.234254 systemd[1]: Started sshd@25-172.31.24.92:22-139.178.89.65:34638.service - OpenSSH per-connection server daemon (139.178.89.65:34638). Dec 16 02:11:24.244234 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:24.244319 kernel: audit: type=1130 audit(1765851084.233:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.24.92:22-139.178.89.65:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:24.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.24.92:22-139.178.89.65:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:11:24.450000 audit[5832]: USER_ACCT pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.457876 sshd[5832]: Accepted publickey for core from 139.178.89.65 port 34638 ssh2: RSA SHA256:GQgi8hrngD5IAzSBvjpWGNrbDxS4+WSDV6E9Am09kRw Dec 16 02:11:24.462741 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:11:24.469699 kernel: audit: type=1101 audit(1765851084.450:912): pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.469898 kernel: audit: type=1103 audit(1765851084.458:913): pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.458000 audit[5832]: CRED_ACQ pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.474127 kernel: audit: type=1006 audit(1765851084.459:914): pid=5832 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 02:11:24.459000 audit[5832]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff33cdb90 a2=3 a3=0 items=0 ppid=1 pid=5832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:24.490509 systemd-logind[1853]: New session 27 of user core. Dec 16 02:11:24.496714 kernel: audit: type=1300 audit(1765851084.459:914): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff33cdb90 a2=3 a3=0 items=0 ppid=1 pid=5832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:24.496864 kernel: audit: type=1327 audit(1765851084.459:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:24.459000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:11:24.501104 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 02:11:24.508000 audit[5832]: USER_START pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.519515 kernel: audit: type=1105 audit(1765851084.508:915): pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.522000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.530516 kernel: audit: type=1103 audit(1765851084.522:916): pid=5836 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.777471 sshd[5836]: Connection closed by 139.178.89.65 port 34638 Dec 16 02:11:24.779144 sshd-session[5832]: pam_unix(sshd:session): session closed for user core Dec 16 02:11:24.783000 audit[5832]: USER_END pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.797366 systemd[1]: sshd@25-172.31.24.92:22-139.178.89.65:34638.service: Deactivated successfully. Dec 16 02:11:24.783000 audit[5832]: CRED_DISP pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.809453 kernel: audit: type=1106 audit(1765851084.783:917): pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.809593 kernel: audit: type=1104 audit(1765851084.783:918): pid=5832 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 02:11:24.806526 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 02:11:24.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.24.92:22-139.178.89.65:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:11:24.817204 systemd-logind[1853]: Session 27 logged out. Waiting for processes to exit. Dec 16 02:11:24.825666 systemd-logind[1853]: Removed session 27. 
Dec 16 02:11:24.995760 kubelet[3500]: E1216 02:11:24.995624 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:11:25.993633 kubelet[3500]: E1216 02:11:25.993554 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:11:28.993712 kubelet[3500]: E1216 02:11:28.993369 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:11:30.992485 containerd[1908]: time="2025-12-16T02:11:30.992088192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:11:31.257135 containerd[1908]: time="2025-12-16T02:11:31.256947081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:31.259396 containerd[1908]: time="2025-12-16T02:11:31.259251501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:11:31.259396 containerd[1908]: time="2025-12-16T02:11:31.259328397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:31.259879 kubelet[3500]: E1216 02:11:31.259605 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:11:31.259879 kubelet[3500]: E1216 02:11:31.259663 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:11:31.259879 kubelet[3500]: E1216 02:11:31.259800 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54647b869b-dj58v_calico-system(1d7a12f8-f60f-4170-be36-168aef541297): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:31.259879 kubelet[3500]: E1216 02:11:31.259853 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:11:33.993533 containerd[1908]: time="2025-12-16T02:11:33.993469911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:11:34.271479 containerd[1908]: time="2025-12-16T02:11:34.271280208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:34.273572 containerd[1908]: time="2025-12-16T02:11:34.273503256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:11:34.273705 containerd[1908]: time="2025-12-16T02:11:34.273626256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:34.273916 kubelet[3500]: E1216 02:11:34.273859 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:11:34.274583 kubelet[3500]: E1216 02:11:34.273938 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:11:34.274583 kubelet[3500]: E1216 02:11:34.274077 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-5q889_calico-system(5f75e4b0-aa22-4937-a793-7da0a16c1ff9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:34.274583 kubelet[3500]: E1216 02:11:34.274131 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:11:35.993344 containerd[1908]: time="2025-12-16T02:11:35.992681741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:11:36.244233 containerd[1908]: time="2025-12-16T02:11:36.244028366Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:36.246700 containerd[1908]: time="2025-12-16T02:11:36.246506078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:11:36.246700 containerd[1908]: time="2025-12-16T02:11:36.246541718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:36.246970 kubelet[3500]: E1216 02:11:36.246868 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:11:36.246970 kubelet[3500]: E1216 02:11:36.246926 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:11:36.247768 kubelet[3500]: E1216 02:11:36.247032 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:36.249114 containerd[1908]: time="2025-12-16T02:11:36.249037046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:11:36.532952 containerd[1908]: time="2025-12-16T02:11:36.532732264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:36.534966 containerd[1908]: time="2025-12-16T02:11:36.534890116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:11:36.535721 containerd[1908]: time="2025-12-16T02:11:36.535024912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:36.535850 kubelet[3500]: E1216 02:11:36.535342 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:11:36.535850 kubelet[3500]: E1216 02:11:36.535404 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:11:36.535850 kubelet[3500]: E1216 02:11:36.535581 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77f9546868-lgh2z_calico-system(b08348aa-b9db-4017-ab2d-63cae97b2a73): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:36.535850 kubelet[3500]: E1216 02:11:36.535650 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:11:37.991375 containerd[1908]: time="2025-12-16T02:11:37.991301071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:38.278260 containerd[1908]: time="2025-12-16T02:11:38.278073640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:38.280298 containerd[1908]: time="2025-12-16T02:11:38.280231804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:38.280473 containerd[1908]: time="2025-12-16T02:11:38.280348360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:38.280700 kubelet[3500]: E1216 02:11:38.280633 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:38.281311 kubelet[3500]: E1216 02:11:38.280701 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:38.281311 kubelet[3500]: E1216 02:11:38.280858 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-t8ws5_calico-apiserver(6363be22-676f-4db3-afb1-0a1ce8d8def2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:38.281311 kubelet[3500]: 
E1216 02:11:38.280953 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:11:38.850271 systemd[1]: cri-containerd-618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa.scope: Deactivated successfully. Dec 16 02:11:38.852539 systemd[1]: cri-containerd-618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa.scope: Consumed 25.662s CPU time, 101.4M memory peak. Dec 16 02:11:38.854000 audit: BPF prog-id=153 op=UNLOAD Dec 16 02:11:38.857480 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 02:11:38.857656 kernel: audit: type=1334 audit(1765851098.854:920): prog-id=153 op=UNLOAD Dec 16 02:11:38.854000 audit: BPF prog-id=157 op=UNLOAD Dec 16 02:11:38.860289 kernel: audit: type=1334 audit(1765851098.854:921): prog-id=157 op=UNLOAD Dec 16 02:11:38.861395 containerd[1908]: time="2025-12-16T02:11:38.861173731Z" level=info msg="received container exit event container_id:\"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\" id:\"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\" pid:3826 exit_status:1 exited_at:{seconds:1765851098 nanos:854472175}" Dec 16 02:11:38.905287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa-rootfs.mount: Deactivated successfully. Dec 16 02:11:39.028394 kubelet[3500]: I1216 02:11:39.028170 3500 scope.go:117] "RemoveContainer" containerID="618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa" Dec 16 02:11:39.035880 containerd[1908]: time="2025-12-16T02:11:39.035799700Z" level=info msg="CreateContainer within sandbox \"279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 02:11:39.055321 containerd[1908]: time="2025-12-16T02:11:39.054320572Z" level=info msg="Container b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:11:39.072484 containerd[1908]: time="2025-12-16T02:11:39.072361864Z" level=info msg="CreateContainer within sandbox \"279e37c9ba6f84db4d530ea8bfda8b5db2c2a65c17dcfb515495dcefd42eb0c4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070\"" Dec 16 02:11:39.074208 containerd[1908]: time="2025-12-16T02:11:39.073213144Z" level=info msg="StartContainer for \"b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070\"" Dec 16 02:11:39.075655 containerd[1908]: time="2025-12-16T02:11:39.075600964Z" level=info msg="connecting to shim b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070" address="unix:///run/containerd/s/5a72882a5edc2d3952d612fdcb706c3dc91188d6333ab1c3f1b8dc912f658d5a" protocol=ttrpc version=3 Dec 16 02:11:39.120824 systemd[1]: Started cri-containerd-b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070.scope - libcontainer container b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070. 
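Every Calico image pull in this stretch fails identically: containerd asks ghcr.io for flatcar/calico/<component>:v3.30.4, the registry answers 404 Not Found, and the kubelet turns that into ErrImagePull and then ImagePullBackOff for the affected pods. A minimal sketch that reproduces the same 404 outside the kubelet via the standard OCI distribution API (repository and tag taken from the log; the anonymous-token flow and media types are assumptions, not something recorded here):

    import json
    import urllib.error
    import urllib.request

    REPO = "flatcar/calico/apiserver"  # from the PullImage messages above
    TAG = "v3.30.4"

    # Anonymous pull token for a public GHCR repository (standard registry token flow).
    token_url = f"https://ghcr.io/token?scope=repository:{REPO}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # Manifest HEAD request; a missing tag yields HTTP 404, matching
    # "fetch failed after status: 404 Not Found" in the containerd log.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.oci.image.manifest.v1+json",
            ]),
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("tag exists, HTTP", resp.status)
    except urllib.error.HTTPError as err:
        print("registry answered HTTP", err.code)  # expect 404 for a missing tag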
Dec 16 02:11:39.146000 audit: BPF prog-id=263 op=LOAD Dec 16 02:11:39.150508 kernel: audit: type=1334 audit(1765851099.146:922): prog-id=263 op=LOAD Dec 16 02:11:39.149000 audit: BPF prog-id=264 op=LOAD Dec 16 02:11:39.149000 audit[5910]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.160245 kernel: audit: type=1334 audit(1765851099.149:923): prog-id=264 op=LOAD Dec 16 02:11:39.160399 kernel: audit: type=1300 audit(1765851099.149:923): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.167447 kernel: audit: type=1327 audit(1765851099.149:923): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.149000 audit: BPF prog-id=264 op=UNLOAD Dec 16 02:11:39.149000 audit[5910]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.177181 kernel: audit: type=1334 audit(1765851099.149:924): prog-id=264 op=UNLOAD Dec 16 02:11:39.177490 kernel: audit: type=1300 audit(1765851099.149:924): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.184547 kernel: audit: type=1327 audit(1765851099.149:924): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.151000 audit: BPF prog-id=265 op=LOAD Dec 16 02:11:39.188310 kernel: audit: type=1334 audit(1765851099.151:925): prog-id=265 op=LOAD Dec 16 02:11:39.151000 audit[5910]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 02:11:39.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.151000 audit: BPF prog-id=266 op=LOAD Dec 16 02:11:39.151000 audit[5910]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.151000 audit: BPF prog-id=266 op=UNLOAD Dec 16 02:11:39.151000 audit[5910]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.151000 audit: BPF prog-id=265 op=UNLOAD Dec 16 02:11:39.151000 audit[5910]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.152000 audit: BPF prog-id=267 op=LOAD Dec 16 02:11:39.152000 audit[5910]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3634 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:39.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238643263333436663234333764656364356664393764613039343935 Dec 16 02:11:39.238741 containerd[1908]: time="2025-12-16T02:11:39.238613393Z" level=info msg="StartContainer for \"b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070\" returns successfully" Dec 16 02:11:39.593031 systemd[1]: cri-containerd-55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a.scope: Deactivated successfully. Dec 16 02:11:39.594713 systemd[1]: cri-containerd-55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a.scope: Consumed 7.176s CPU time, 65.8M memory peak. 
Dec 16 02:11:39.595000 audit: BPF prog-id=268 op=LOAD Dec 16 02:11:39.595000 audit: BPF prog-id=96 op=UNLOAD Dec 16 02:11:39.597000 audit: BPF prog-id=110 op=UNLOAD Dec 16 02:11:39.597000 audit: BPF prog-id=115 op=UNLOAD Dec 16 02:11:39.601740 containerd[1908]: time="2025-12-16T02:11:39.601681171Z" level=info msg="received container exit event container_id:\"55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a\" id:\"55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a\" pid:3070 exit_status:1 exited_at:{seconds:1765851099 nanos:600318955}" Dec 16 02:11:39.654964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a-rootfs.mount: Deactivated successfully. Dec 16 02:11:40.036961 kubelet[3500]: I1216 02:11:40.036880 3500 scope.go:117] "RemoveContainer" containerID="55dcd13e07b733e2ad30aca831ddd8992bea516a34d8a1a1d367617b2150c84a" Dec 16 02:11:40.048629 containerd[1908]: time="2025-12-16T02:11:40.048339737Z" level=info msg="CreateContainer within sandbox \"7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 02:11:40.074494 containerd[1908]: time="2025-12-16T02:11:40.074397245Z" level=info msg="Container ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:11:40.101616 containerd[1908]: time="2025-12-16T02:11:40.101530577Z" level=info msg="CreateContainer within sandbox \"7c5a46af1ee70f640ea2bec31a267bff282fc53d2013ffb78e775424ee2ce8ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a\"" Dec 16 02:11:40.103438 containerd[1908]: time="2025-12-16T02:11:40.103009301Z" level=info msg="StartContainer for \"ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a\"" Dec 16 02:11:40.105721 containerd[1908]: time="2025-12-16T02:11:40.105651473Z" level=info msg="connecting to shim ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a" address="unix:///run/containerd/s/27055cffc1e913dd5a2eced25dbef41d7b248f358ae25fd53466422364128939" protocol=ttrpc version=3 Dec 16 02:11:40.146763 systemd[1]: Started cri-containerd-ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a.scope - libcontainer container ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a. 
Dec 16 02:11:40.174000 audit: BPF prog-id=269 op=LOAD Dec 16 02:11:40.176000 audit: BPF prog-id=270 op=LOAD Dec 16 02:11:40.176000 audit[5957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.176000 audit: BPF prog-id=270 op=UNLOAD Dec 16 02:11:40.176000 audit[5957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.176000 audit: BPF prog-id=271 op=LOAD Dec 16 02:11:40.176000 audit[5957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.176000 audit: BPF prog-id=272 op=LOAD Dec 16 02:11:40.176000 audit[5957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.176000 audit: BPF prog-id=272 op=UNLOAD Dec 16 02:11:40.176000 audit[5957]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.176000 audit: BPF prog-id=271 op=UNLOAD Dec 16 02:11:40.176000 audit[5957]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.177000 audit: BPF prog-id=273 op=LOAD Dec 16 02:11:40.177000 audit[5957]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2911 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:40.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666653466656437303339623563663164313330663434336638316566 Dec 16 02:11:40.248815 containerd[1908]: time="2025-12-16T02:11:40.248744310Z" level=info msg="StartContainer for \"ffe4fed7039b5cf1d130f443f81ef98d126fa7836e73e875bb951a62fe10cd1a\" returns successfully" Dec 16 02:11:40.992514 containerd[1908]: time="2025-12-16T02:11:40.991601938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:11:41.280499 containerd[1908]: time="2025-12-16T02:11:41.280210147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:41.283510 containerd[1908]: time="2025-12-16T02:11:41.283344379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:11:41.283865 containerd[1908]: time="2025-12-16T02:11:41.283428115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:41.284188 kubelet[3500]: E1216 02:11:41.284141 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:41.285252 kubelet[3500]: E1216 02:11:41.284799 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:11:41.285252 kubelet[3500]: E1216 02:11:41.285122 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8495b986f5-pp87t_calico-apiserver(2d19a364-8480-43c0-bbf1-372d74633ca8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:41.285513 kubelet[3500]: 
E1216 02:11:41.285220 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:11:41.993236 containerd[1908]: time="2025-12-16T02:11:41.992913539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:11:42.269499 containerd[1908]: time="2025-12-16T02:11:42.269322692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:42.273886 containerd[1908]: time="2025-12-16T02:11:42.272484260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:11:42.273886 containerd[1908]: time="2025-12-16T02:11:42.272543960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:42.274110 kubelet[3500]: E1216 02:11:42.272893 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:11:42.274110 kubelet[3500]: E1216 02:11:42.273000 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:11:42.274110 kubelet[3500]: E1216 02:11:42.273165 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:42.275518 containerd[1908]: time="2025-12-16T02:11:42.275159432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:11:42.557483 containerd[1908]: time="2025-12-16T02:11:42.556838914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:11:42.559098 containerd[1908]: time="2025-12-16T02:11:42.559023418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:11:42.559098 containerd[1908]: time="2025-12-16T02:11:42.559050802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:11:42.559375 kubelet[3500]: E1216 02:11:42.559323 3500 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:42.559881 kubelet[3500]: E1216 02:11:42.559382 3500 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:11:42.559881 kubelet[3500]: E1216 02:11:42.559514 3500 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7f5sg_calico-system(aaad2db4-9021-4d31-8275-e9b7ba731389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:11:42.559881 kubelet[3500]: E1216 02:11:42.559581 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:11:44.113172 systemd[1]: cri-containerd-2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822.scope: Deactivated successfully. Dec 16 02:11:44.115085 systemd[1]: cri-containerd-2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822.scope: Consumed 6.415s CPU time, 20.3M memory peak. Dec 16 02:11:44.116000 audit: BPF prog-id=113 op=UNLOAD Dec 16 02:11:44.120266 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 16 02:11:44.120621 kernel: audit: type=1334 audit(1765851104.116:942): prog-id=113 op=UNLOAD Dec 16 02:11:44.123043 containerd[1908]: time="2025-12-16T02:11:44.122968821Z" level=info msg="received container exit event container_id:\"2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822\" id:\"2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822\" pid:3063 exit_status:1 exited_at:{seconds:1765851104 nanos:121034073}" Dec 16 02:11:44.116000 audit: BPF prog-id=119 op=UNLOAD Dec 16 02:11:44.126104 kernel: audit: type=1334 audit(1765851104.116:943): prog-id=119 op=UNLOAD Dec 16 02:11:44.117000 audit: BPF prog-id=274 op=LOAD Dec 16 02:11:44.128365 kernel: audit: type=1334 audit(1765851104.117:944): prog-id=274 op=LOAD Dec 16 02:11:44.121000 audit: BPF prog-id=95 op=UNLOAD Dec 16 02:11:44.130306 kernel: audit: type=1334 audit(1765851104.121:945): prog-id=95 op=UNLOAD Dec 16 02:11:44.172452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822-rootfs.mount: Deactivated successfully. 
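The container exit events above record the exit time as epoch seconds/nanoseconds (for example exited_at:{seconds:1765851104 nanos:121034073} for container 2770fe300…), while the surrounding journal lines use wall-clock UTC. A small sketch for converting one into the journal's timestamp format so the two can be correlated directly:

    from datetime import datetime, timedelta, timezone

    def exited_at_to_journal(seconds: int, nanos: int = 0) -> str:
        # The journal prefixes in this log are UTC (they line up with the audit(<epoch>...) stamps).
        ts = datetime.fromtimestamp(seconds, tz=timezone.utc) + timedelta(microseconds=nanos // 1000)
        return ts.strftime("%b %d %H:%M:%S.%f")

    # Exit event of container 2770fe300... recorded above.
    print(exited_at_to_journal(1765851104, 121034073))  # -> Dec 16 02:11:44.121034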
Dec 16 02:11:45.090138 kubelet[3500]: I1216 02:11:45.089657 3500 scope.go:117] "RemoveContainer" containerID="2770fe300aa620ad258c140fcc98d061a6520c6d47fea5ca712e08f81e913822" Dec 16 02:11:45.095905 containerd[1908]: time="2025-12-16T02:11:45.095849218Z" level=info msg="CreateContainer within sandbox \"d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 02:11:45.116731 containerd[1908]: time="2025-12-16T02:11:45.116658694Z" level=info msg="Container 53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:11:45.136428 containerd[1908]: time="2025-12-16T02:11:45.136312930Z" level=info msg="CreateContainer within sandbox \"d1caf9beb499324b1b036d5f568c5d3ee884a780a0dbfc65e5e2bfb51da23cde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01\"" Dec 16 02:11:45.137591 containerd[1908]: time="2025-12-16T02:11:45.137178934Z" level=info msg="StartContainer for \"53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01\"" Dec 16 02:11:45.139640 containerd[1908]: time="2025-12-16T02:11:45.139590370Z" level=info msg="connecting to shim 53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01" address="unix:///run/containerd/s/8d622b48c57065ad352c4bb8ffdb88482360dbae0367b499342c2496e5d5ef83" protocol=ttrpc version=3 Dec 16 02:11:45.181789 systemd[1]: Started cri-containerd-53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01.scope - libcontainer container 53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01. Dec 16 02:11:45.207000 audit: BPF prog-id=275 op=LOAD Dec 16 02:11:45.208000 audit: BPF prog-id=276 op=LOAD Dec 16 02:11:45.212069 kernel: audit: type=1334 audit(1765851105.207:946): prog-id=275 op=LOAD Dec 16 02:11:45.212179 kernel: audit: type=1334 audit(1765851105.208:947): prog-id=276 op=LOAD Dec 16 02:11:45.218200 kernel: audit: type=1300 audit(1765851105.208:947): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.208000 audit[6009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.224906 kernel: audit: type=1327 audit(1765851105.208:947): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.209000 audit: BPF prog-id=276 op=UNLOAD Dec 16 02:11:45.232199 kernel: audit: type=1334 audit(1765851105.209:948): prog-id=276 op=UNLOAD Dec 16 02:11:45.209000 audit[6009]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.238700 kernel: audit: type=1300 audit(1765851105.209:948): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.209000 audit: BPF prog-id=277 op=LOAD Dec 16 02:11:45.209000 audit[6009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.210000 audit: BPF prog-id=278 op=LOAD Dec 16 02:11:45.210000 audit[6009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.217000 audit: BPF prog-id=278 op=UNLOAD Dec 16 02:11:45.217000 audit[6009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.217000 audit: BPF prog-id=277 op=UNLOAD Dec 16 02:11:45.217000 audit[6009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.217000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.217000 audit: BPF prog-id=279 op=LOAD Dec 16 02:11:45.217000 audit[6009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2901 pid=6009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:11:45.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533363630653134393034656365386565316230306461393237366635 Dec 16 02:11:45.329073 containerd[1908]: time="2025-12-16T02:11:45.328990031Z" level=info msg="StartContainer for \"53660e14904ece8ee1b00da9276f58a5f6740ce075652baf9271f87f0dbefa01\" returns successfully" Dec 16 02:11:45.991055 kubelet[3500]: E1216 02:11:45.990964 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:11:46.997665 kubelet[3500]: E1216 02:11:46.997151 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54647b869b-dj58v" podUID="1d7a12f8-f60f-4170-be36-168aef541297" Dec 16 02:11:47.001957 kubelet[3500]: E1216 02:11:47.001785 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:11:48.241738 kubelet[3500]: E1216 02:11:48.241119 3500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-92?timeout=10s\": context deadline exceeded" Dec 16 
02:11:50.715271 systemd[1]: cri-containerd-b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070.scope: Deactivated successfully. Dec 16 02:11:50.720383 containerd[1908]: time="2025-12-16T02:11:50.720193362Z" level=info msg="received container exit event container_id:\"b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070\" id:\"b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070\" pid:5924 exit_status:1 exited_at:{seconds:1765851110 nanos:719330466}" Dec 16 02:11:50.722266 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 02:11:50.722332 kernel: audit: type=1334 audit(1765851110.719:954): prog-id=263 op=UNLOAD Dec 16 02:11:50.719000 audit: BPF prog-id=263 op=UNLOAD Dec 16 02:11:50.719000 audit: BPF prog-id=267 op=UNLOAD Dec 16 02:11:50.725349 kernel: audit: type=1334 audit(1765851110.719:955): prog-id=267 op=UNLOAD Dec 16 02:11:50.769974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070-rootfs.mount: Deactivated successfully. Dec 16 02:11:51.126057 kubelet[3500]: I1216 02:11:51.125918 3500 scope.go:117] "RemoveContainer" containerID="618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa" Dec 16 02:11:51.126781 kubelet[3500]: I1216 02:11:51.126562 3500 scope.go:117] "RemoveContainer" containerID="b8d2c346f2437decd5fd97da09495f28f46ffc19b3edf0e43e4dc73e7bb13070" Dec 16 02:11:51.126855 kubelet[3500]: E1216 02:11:51.126796 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-ghhwl_tigera-operator(5312aea8-e4ec-4538-892c-09070271c0cd)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-ghhwl" podUID="5312aea8-e4ec-4538-892c-09070271c0cd" Dec 16 02:11:51.131468 containerd[1908]: time="2025-12-16T02:11:51.131256124Z" level=info msg="RemoveContainer for \"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\"" Dec 16 02:11:51.140580 containerd[1908]: time="2025-12-16T02:11:51.140474452Z" level=info msg="RemoveContainer for \"618860c6776a82bb73cfa045239b6e1b346c5352c8829b8ebbcc6193f4e451aa\" returns successfully" Dec 16 02:11:51.991290 kubelet[3500]: E1216 02:11:51.991225 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-t8ws5" podUID="6363be22-676f-4db3-afb1-0a1ce8d8def2" Dec 16 02:11:56.991946 kubelet[3500]: E1216 02:11:56.991814 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8495b986f5-pp87t" podUID="2d19a364-8480-43c0-bbf1-372d74633ca8" Dec 16 02:11:57.991835 kubelet[3500]: E1216 02:11:57.991444 3500 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-5q889" podUID="5f75e4b0-aa22-4937-a793-7da0a16c1ff9" Dec 16 02:11:57.992976 kubelet[3500]: E1216 02:11:57.992654 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7f5sg" podUID="aaad2db4-9021-4d31-8275-e9b7ba731389" Dec 16 02:11:57.992976 kubelet[3500]: E1216 02:11:57.992871 3500 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77f9546868-lgh2z" podUID="b08348aa-b9db-4017-ab2d-63cae97b2a73" Dec 16 02:11:58.242760 kubelet[3500]: E1216 02:11:58.241716 3500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-92?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"