Jan 14 23:48:17.177662 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 14 23:48:17.177707 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Jan 14 22:02:18 -00 2026 Jan 14 23:48:17.177733 kernel: KASLR disabled due to lack of seed Jan 14 23:48:17.177751 kernel: efi: EFI v2.7 by EDK II Jan 14 23:48:17.177767 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598 Jan 14 23:48:17.177783 kernel: secureboot: Secure boot disabled Jan 14 23:48:17.178294 kernel: ACPI: Early table checksum verification disabled Jan 14 23:48:17.178323 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 14 23:48:17.178341 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 14 23:48:17.178365 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 14 23:48:17.178383 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 14 23:48:17.178399 kernel: ACPI: FACS 0x0000000078630000 000040 Jan 14 23:48:17.178415 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 14 23:48:17.178432 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 14 23:48:17.178458 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 14 23:48:17.178477 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 14 23:48:17.178495 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 14 23:48:17.178512 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 14 23:48:17.178529 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 14 23:48:17.178546 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 14 23:48:17.178563 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 14 23:48:17.178580 kernel: printk: legacy bootconsole [uart0] enabled Jan 14 23:48:17.178597 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 14 23:48:17.178621 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 14 23:48:17.178643 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Jan 14 23:48:17.178661 kernel: Zone ranges: Jan 14 23:48:17.178678 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 14 23:48:17.178694 kernel: DMA32 empty Jan 14 23:48:17.178711 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 14 23:48:17.178728 kernel: Device empty Jan 14 23:48:17.178745 kernel: Movable zone start for each node Jan 14 23:48:17.178762 kernel: Early memory node ranges Jan 14 23:48:17.178778 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 14 23:48:17.178795 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 14 23:48:17.178812 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 14 23:48:17.178829 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 14 23:48:17.178850 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 14 23:48:17.178867 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 14 23:48:17.178884 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 14 23:48:17.178901 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 14 23:48:17.178925 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 14 23:48:17.178947 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 14 23:48:17.178965 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Jan 14 23:48:17.178983 kernel: psci: probing for conduit method from ACPI. Jan 14 23:48:17.179000 kernel: psci: PSCIv1.0 detected in firmware. Jan 14 23:48:17.179018 kernel: psci: Using standard PSCI v0.2 function IDs Jan 14 23:48:17.179035 kernel: psci: Trusted OS migration not required Jan 14 23:48:17.179053 kernel: psci: SMC Calling Convention v1.1 Jan 14 23:48:17.179070 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 14 23:48:17.179088 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 14 23:48:17.179110 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 14 23:48:17.179129 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 14 23:48:17.179146 kernel: Detected PIPT I-cache on CPU0 Jan 14 23:48:17.179164 kernel: CPU features: detected: GIC system register CPU interface Jan 14 23:48:17.179182 kernel: CPU features: detected: Spectre-v2 Jan 14 23:48:17.179199 kernel: CPU features: detected: Spectre-v3a Jan 14 23:48:17.179217 kernel: CPU features: detected: Spectre-BHB Jan 14 23:48:17.180294 kernel: CPU features: detected: ARM erratum 1742098 Jan 14 23:48:17.180322 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 14 23:48:17.180340 kernel: alternatives: applying boot alternatives Jan 14 23:48:17.180360 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a Jan 14 23:48:17.180388 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 23:48:17.180406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 23:48:17.180424 kernel: Fallback order for Node 0: 0 Jan 14 23:48:17.180442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jan 14 23:48:17.180459 kernel: Policy zone: Normal Jan 14 23:48:17.180477 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 23:48:17.180494 kernel: software IO TLB: area num 2. Jan 14 23:48:17.180512 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Jan 14 23:48:17.180530 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 23:48:17.180547 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 23:48:17.180571 kernel: rcu: RCU event tracing is enabled. Jan 14 23:48:17.180589 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 23:48:17.180607 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 23:48:17.180625 kernel: Tracing variant of Tasks RCU enabled. Jan 14 23:48:17.180643 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 23:48:17.180661 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 23:48:17.180680 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
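
The kernel command line above shows the Flatcar /usr partition being mounted through dm-verity: mount.usr=/dev/mapper/usr names the mapped device, verity.usr=PARTUUID=7130c94a-... selects the backing USR-A partition, and verity.usrhash carries the expected root hash. As a rough sketch (not taken from this log), the resulting mapping can be inspected from the booted system with standard device-mapper tools, assuming the mapping is indeed named "usr" as /dev/mapper/usr suggests:

    veritysetup status usr                      # summarize the verity mapping backing /usr
    dmsetup table usr                           # raw dm table, including the root hash in use
    lsblk -o NAME,PARTUUID,PARTLABEL            # locate the USR-A partition referenced by PARTUUID
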
Jan 14 23:48:17.180699 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 23:48:17.180718 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 23:48:17.180736 kernel: GICv3: 96 SPIs implemented Jan 14 23:48:17.180755 kernel: GICv3: 0 Extended SPIs implemented Jan 14 23:48:17.180779 kernel: Root IRQ handler: gic_handle_irq Jan 14 23:48:17.180797 kernel: GICv3: GICv3 features: 16 PPIs Jan 14 23:48:17.180815 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 14 23:48:17.180833 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 14 23:48:17.180851 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 14 23:48:17.180868 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jan 14 23:48:17.180886 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jan 14 23:48:17.180904 kernel: GICv3: using LPI property table @0x0000000400110000 Jan 14 23:48:17.180922 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 14 23:48:17.180939 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jan 14 23:48:17.180957 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 23:48:17.180980 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 14 23:48:17.180998 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 14 23:48:17.181016 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 14 23:48:17.181034 kernel: Console: colour dummy device 80x25 Jan 14 23:48:17.181053 kernel: printk: legacy console [tty1] enabled Jan 14 23:48:17.181071 kernel: ACPI: Core revision 20240827 Jan 14 23:48:17.181090 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 14 23:48:17.181109 kernel: pid_max: default: 32768 minimum: 301 Jan 14 23:48:17.181131 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 23:48:17.181150 kernel: landlock: Up and running. Jan 14 23:48:17.181168 kernel: SELinux: Initializing. Jan 14 23:48:17.181186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 23:48:17.181205 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 23:48:17.181223 kernel: rcu: Hierarchical SRCU implementation. Jan 14 23:48:17.181267 kernel: rcu: Max phase no-delay instances is 400. Jan 14 23:48:17.181287 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 23:48:17.181312 kernel: Remapping and enabling EFI services. Jan 14 23:48:17.181330 kernel: smp: Bringing up secondary CPUs ... Jan 14 23:48:17.181350 kernel: Detected PIPT I-cache on CPU1 Jan 14 23:48:17.181369 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 14 23:48:17.181387 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jan 14 23:48:17.181405 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 14 23:48:17.181423 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 23:48:17.181447 kernel: SMP: Total of 2 processors activated. 
Jan 14 23:48:17.181465 kernel: CPU: All CPU(s) started at EL1 Jan 14 23:48:17.181495 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 23:48:17.181518 kernel: CPU features: detected: 32-bit EL1 Support Jan 14 23:48:17.181537 kernel: CPU features: detected: CRC32 instructions Jan 14 23:48:17.181556 kernel: alternatives: applying system-wide alternatives Jan 14 23:48:17.181577 kernel: Memory: 3823468K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 185652K reserved, 16384K cma-reserved) Jan 14 23:48:17.181597 kernel: devtmpfs: initialized Jan 14 23:48:17.181621 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 23:48:17.181640 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 23:48:17.181659 kernel: 23664 pages in range for non-PLT usage Jan 14 23:48:17.181678 kernel: 515184 pages in range for PLT usage Jan 14 23:48:17.181697 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 23:48:17.181721 kernel: SMBIOS 3.0.0 present. Jan 14 23:48:17.181739 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 14 23:48:17.181758 kernel: DMI: Memory slots populated: 0/0 Jan 14 23:48:17.181778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 23:48:17.182343 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 23:48:17.182364 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 23:48:17.182384 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 23:48:17.182412 kernel: audit: initializing netlink subsys (disabled) Jan 14 23:48:17.182431 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Jan 14 23:48:17.182450 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 23:48:17.182469 kernel: cpuidle: using governor menu Jan 14 23:48:17.182488 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 23:48:17.182508 kernel: ASID allocator initialised with 65536 entries Jan 14 23:48:17.182527 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 23:48:17.182551 kernel: Serial: AMBA PL011 UART driver Jan 14 23:48:17.182570 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 23:48:17.182590 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 23:48:17.182608 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 23:48:17.182628 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 23:48:17.182647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 23:48:17.182666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 23:48:17.182690 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 23:48:17.182709 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 23:48:17.182728 kernel: ACPI: Added _OSI(Module Device) Jan 14 23:48:17.182747 kernel: ACPI: Added _OSI(Processor Device) Jan 14 23:48:17.182766 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 23:48:17.182785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 23:48:17.182804 kernel: ACPI: Interpreter enabled Jan 14 23:48:17.182828 kernel: ACPI: Using GIC for interrupt routing Jan 14 23:48:17.182847 kernel: ACPI: MCFG table detected, 1 entries Jan 14 23:48:17.182866 kernel: ACPI: CPU0 has been hot-added Jan 14 23:48:17.182885 kernel: ACPI: CPU1 has been hot-added Jan 14 23:48:17.182904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 14 23:48:17.184816 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 23:48:17.191018 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 23:48:17.191330 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 23:48:17.191585 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 14 23:48:17.191864 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 14 23:48:17.191891 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 14 23:48:17.191912 kernel: acpiphp: Slot [1] registered Jan 14 23:48:17.191932 kernel: acpiphp: Slot [2] registered Jan 14 23:48:17.191957 kernel: acpiphp: Slot [3] registered Jan 14 23:48:17.191977 kernel: acpiphp: Slot [4] registered Jan 14 23:48:17.191996 kernel: acpiphp: Slot [5] registered Jan 14 23:48:17.192015 kernel: acpiphp: Slot [6] registered Jan 14 23:48:17.192034 kernel: acpiphp: Slot [7] registered Jan 14 23:48:17.192053 kernel: acpiphp: Slot [8] registered Jan 14 23:48:17.192072 kernel: acpiphp: Slot [9] registered Jan 14 23:48:17.192091 kernel: acpiphp: Slot [10] registered Jan 14 23:48:17.192115 kernel: acpiphp: Slot [11] registered Jan 14 23:48:17.192134 kernel: acpiphp: Slot [12] registered Jan 14 23:48:17.192153 kernel: acpiphp: Slot [13] registered Jan 14 23:48:17.192172 kernel: acpiphp: Slot [14] registered Jan 14 23:48:17.192191 kernel: acpiphp: Slot [15] registered Jan 14 23:48:17.192210 kernel: acpiphp: Slot [16] registered Jan 14 23:48:17.192280 kernel: acpiphp: Slot [17] registered Jan 14 23:48:17.192310 kernel: acpiphp: Slot [18] registered Jan 14 23:48:17.192330 kernel: acpiphp: Slot [19] registered Jan 14 23:48:17.192350 kernel: acpiphp: Slot [20] registered Jan 14 23:48:17.192370 kernel: acpiphp: Slot [21] registered Jan 14 23:48:17.192390 
kernel: acpiphp: Slot [22] registered Jan 14 23:48:17.192409 kernel: acpiphp: Slot [23] registered Jan 14 23:48:17.192427 kernel: acpiphp: Slot [24] registered Jan 14 23:48:17.192450 kernel: acpiphp: Slot [25] registered Jan 14 23:48:17.192470 kernel: acpiphp: Slot [26] registered Jan 14 23:48:17.192489 kernel: acpiphp: Slot [27] registered Jan 14 23:48:17.192507 kernel: acpiphp: Slot [28] registered Jan 14 23:48:17.192526 kernel: acpiphp: Slot [29] registered Jan 14 23:48:17.192545 kernel: acpiphp: Slot [30] registered Jan 14 23:48:17.192564 kernel: acpiphp: Slot [31] registered Jan 14 23:48:17.192582 kernel: PCI host bridge to bus 0000:00 Jan 14 23:48:17.193074 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 14 23:48:17.193344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 14 23:48:17.193577 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 14 23:48:17.193805 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 14 23:48:17.194092 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jan 14 23:48:17.196487 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jan 14 23:48:17.196770 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jan 14 23:48:17.197051 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jan 14 23:48:17.198457 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jan 14 23:48:17.198761 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 14 23:48:17.199085 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jan 14 23:48:17.199439 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jan 14 23:48:17.199747 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jan 14 23:48:17.200019 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jan 14 23:48:17.200334 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 14 23:48:17.200599 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 14 23:48:17.200868 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 14 23:48:17.201110 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 14 23:48:17.201138 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 14 23:48:17.201159 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 14 23:48:17.201181 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 14 23:48:17.201202 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 14 23:48:17.201221 kernel: iommu: Default domain type: Translated Jan 14 23:48:17.202424 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 23:48:17.202447 kernel: efivars: Registered efivars operations Jan 14 23:48:17.202467 kernel: vgaarb: loaded Jan 14 23:48:17.202487 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 23:48:17.202506 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 23:48:17.202525 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 23:48:17.202545 kernel: pnp: PnP ACPI init Jan 14 23:48:17.204343 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 14 23:48:17.204392 kernel: pnp: PnP ACPI: found 1 devices Jan 14 23:48:17.204413 kernel: NET: Registered PF_INET protocol family Jan 14 23:48:17.204433 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 23:48:17.204453 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 23:48:17.204473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 23:48:17.204493 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 23:48:17.204523 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 23:48:17.204568 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 23:48:17.204589 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 23:48:17.204608 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 23:48:17.204628 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 23:48:17.204647 kernel: PCI: CLS 0 bytes, default 64 Jan 14 23:48:17.204666 kernel: kvm [1]: HYP mode not available Jan 14 23:48:17.204692 kernel: Initialise system trusted keyrings Jan 14 23:48:17.204711 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 23:48:17.204730 kernel: Key type asymmetric registered Jan 14 23:48:17.204749 kernel: Asymmetric key parser 'x509' registered Jan 14 23:48:17.204768 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 23:48:17.204788 kernel: io scheduler mq-deadline registered Jan 14 23:48:17.204807 kernel: io scheduler kyber registered Jan 14 23:48:17.204831 kernel: io scheduler bfq registered Jan 14 23:48:17.208446 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 14 23:48:17.208496 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 14 23:48:17.208518 kernel: ACPI: button: Power Button [PWRB] Jan 14 23:48:17.208539 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 14 23:48:17.208560 kernel: ACPI: button: Sleep Button [SLPB] Jan 14 23:48:17.208588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 23:48:17.208609 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 14 23:48:17.208885 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 14 23:48:17.208913 kernel: printk: legacy console [ttyS0] disabled Jan 14 23:48:17.208933 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 14 23:48:17.208952 kernel: printk: legacy console [ttyS0] enabled Jan 14 23:48:17.208971 kernel: printk: legacy bootconsole [uart0] disabled Jan 14 23:48:17.208996 kernel: thunder_xcv, ver 1.0 Jan 14 23:48:17.209016 kernel: thunder_bgx, ver 1.0 Jan 14 23:48:17.209035 kernel: nicpf, ver 1.0 Jan 14 23:48:17.209054 kernel: nicvf, ver 1.0 Jan 14 23:48:17.211997 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 23:48:17.212318 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T23:48:13 UTC (1768434493) Jan 14 23:48:17.212351 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 23:48:17.212382 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jan 14 23:48:17.212402 kernel: NET: Registered PF_INET6 protocol family Jan 14 23:48:17.212422 kernel: watchdog: NMI not fully supported Jan 14 23:48:17.212442 kernel: watchdog: Hard watchdog permanently disabled Jan 14 23:48:17.212461 kernel: Segment Routing with IPv6 Jan 14 23:48:17.212480 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 23:48:17.212501 kernel: NET: Registered PF_PACKET protocol family Jan 14 23:48:17.212525 kernel: Key type 
dns_resolver registered Jan 14 23:48:17.212545 kernel: registered taskstats version 1 Jan 14 23:48:17.212565 kernel: Loading compiled-in X.509 certificates Jan 14 23:48:17.212584 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a690a20944211e11dad41e677dd7158a4ddc3c87' Jan 14 23:48:17.212603 kernel: Demotion targets for Node 0: null Jan 14 23:48:17.212623 kernel: Key type .fscrypt registered Jan 14 23:48:17.212641 kernel: Key type fscrypt-provisioning registered Jan 14 23:48:17.212664 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 23:48:17.212684 kernel: ima: Allocated hash algorithm: sha1 Jan 14 23:48:17.212703 kernel: ima: No architecture policies found Jan 14 23:48:17.212723 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 23:48:17.212743 kernel: clk: Disabling unused clocks Jan 14 23:48:17.212763 kernel: PM: genpd: Disabling unused power domains Jan 14 23:48:17.212782 kernel: Freeing unused kernel memory: 12416K Jan 14 23:48:17.212802 kernel: Run /init as init process Jan 14 23:48:17.212826 kernel: with arguments: Jan 14 23:48:17.212845 kernel: /init Jan 14 23:48:17.212866 kernel: with environment: Jan 14 23:48:17.212884 kernel: HOME=/ Jan 14 23:48:17.212904 kernel: TERM=linux Jan 14 23:48:17.212925 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 14 23:48:17.213167 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 14 23:48:17.218478 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 14 23:48:17.218527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 23:48:17.218548 kernel: GPT:25804799 != 33554431 Jan 14 23:48:17.218568 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 23:48:17.218588 kernel: GPT:25804799 != 33554431 Jan 14 23:48:17.218607 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 23:48:17.218638 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 14 23:48:17.218658 kernel: SCSI subsystem initialized Jan 14 23:48:17.218679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 23:48:17.218699 kernel: device-mapper: uevent: version 1.0.3 Jan 14 23:48:17.218720 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 23:48:17.218740 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 23:48:17.218760 kernel: raid6: neonx8 gen() 6555 MB/s Jan 14 23:48:17.218784 kernel: raid6: neonx4 gen() 6587 MB/s Jan 14 23:48:17.218803 kernel: raid6: neonx2 gen() 5447 MB/s Jan 14 23:48:17.218822 kernel: raid6: neonx1 gen() 3936 MB/s Jan 14 23:48:17.218841 kernel: raid6: int64x8 gen() 3581 MB/s Jan 14 23:48:17.218860 kernel: raid6: int64x4 gen() 3719 MB/s Jan 14 23:48:17.218879 kernel: raid6: int64x2 gen() 3594 MB/s Jan 14 23:48:17.218898 kernel: raid6: int64x1 gen() 2748 MB/s Jan 14 23:48:17.218921 kernel: raid6: using algorithm neonx4 gen() 6587 MB/s Jan 14 23:48:17.218941 kernel: raid6: .... 
xor() 4884 MB/s, rmw enabled Jan 14 23:48:17.218960 kernel: raid6: using neon recovery algorithm Jan 14 23:48:17.218979 kernel: xor: measuring software checksum speed Jan 14 23:48:17.218998 kernel: 8regs : 12940 MB/sec Jan 14 23:48:17.219017 kernel: 32regs : 13024 MB/sec Jan 14 23:48:17.219036 kernel: arm64_neon : 8904 MB/sec Jan 14 23:48:17.219059 kernel: xor: using function: 32regs (13024 MB/sec) Jan 14 23:48:17.219079 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 23:48:17.219098 kernel: BTRFS: device fsid 78d59ed4-d19c-4fcc-8998-5f0c19b42daf devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (221) Jan 14 23:48:17.219118 kernel: BTRFS info (device dm-0): first mount of filesystem 78d59ed4-d19c-4fcc-8998-5f0c19b42daf Jan 14 23:48:17.219138 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:48:17.219158 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 23:48:17.219177 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 23:48:17.219200 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 23:48:17.219220 kernel: loop: module loaded Jan 14 23:48:17.219263 kernel: loop0: detected capacity change from 0 to 91488 Jan 14 23:48:17.219284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 23:48:17.219306 systemd[1]: Successfully made /usr/ read-only. Jan 14 23:48:17.219333 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 23:48:17.219360 systemd[1]: Detected virtualization amazon. Jan 14 23:48:17.219381 systemd[1]: Detected architecture arm64. Jan 14 23:48:17.219401 systemd[1]: Running in initrd. Jan 14 23:48:17.219421 systemd[1]: No hostname configured, using default hostname. Jan 14 23:48:17.219442 systemd[1]: Hostname set to . Jan 14 23:48:17.219463 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 23:48:17.219483 systemd[1]: Queued start job for default target initrd.target. Jan 14 23:48:17.219508 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:48:17.219529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:48:17.219550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:48:17.219572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 23:48:17.219593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 23:48:17.219635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 23:48:17.219657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 23:48:17.219679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:48:17.219717 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:48:17.219745 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:48:17.219772 systemd[1]: Reached target paths.target - Path Units. 
Jan 14 23:48:17.219793 systemd[1]: Reached target slices.target - Slice Units. Jan 14 23:48:17.219814 systemd[1]: Reached target swap.target - Swaps. Jan 14 23:48:17.219835 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:48:17.219857 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:48:17.219879 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:48:17.219900 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:48:17.219925 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 23:48:17.219946 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 23:48:17.219967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:48:17.219989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 23:48:17.220010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:48:17.220030 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:48:17.220052 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 23:48:17.220078 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 23:48:17.220099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 23:48:17.220120 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 23:48:17.220142 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 23:48:17.220163 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 23:48:17.220184 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 23:48:17.220205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 23:48:17.222716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:48:17.222754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 23:48:17.222786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:48:17.222808 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 23:48:17.222830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 23:48:17.222851 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 23:48:17.222872 kernel: Bridge firewalling registered Jan 14 23:48:17.222897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 23:48:17.222968 systemd-journald[360]: Collecting audit messages is enabled. Jan 14 23:48:17.223014 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 23:48:17.223042 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:48:17.223064 kernel: audit: type=1130 audit(1768434497.185:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.223085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 14 23:48:17.223107 systemd-journald[360]: Journal started Jan 14 23:48:17.223145 systemd-journald[360]: Runtime Journal (/run/log/journal/ec26d7cff3ee3388e11677ef765d36df) is 8M, max 75.3M, 67.3M free. Jan 14 23:48:17.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.158010 systemd-modules-load[361]: Inserted module 'br_netfilter' Jan 14 23:48:17.228252 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:48:17.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.240267 kernel: audit: type=1130 audit(1768434497.233:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.240311 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 23:48:17.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.251307 kernel: audit: type=1130 audit(1768434497.244:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.251494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:48:17.261179 kernel: audit: type=1130 audit(1768434497.253:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.261205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:48:17.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.272273 kernel: audit: type=1130 audit(1768434497.265:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.274449 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 23:48:17.287000 audit: BPF prog-id=6 op=LOAD Jan 14 23:48:17.292055 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 23:48:17.299788 kernel: audit: type=1334 audit(1768434497.287:7): prog-id=6 op=LOAD Jan 14 23:48:17.307682 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 23:48:17.336786 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 23:48:17.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.358452 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 23:48:17.369476 kernel: audit: type=1130 audit(1768434497.348:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.363713 systemd-tmpfiles[386]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 23:48:17.377463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:48:17.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.389305 kernel: audit: type=1130 audit(1768434497.376:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.465815 dracut-cmdline[399]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a Jan 14 23:48:17.500528 systemd-resolved[385]: Positive Trust Anchors: Jan 14 23:48:17.500563 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 23:48:17.500572 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:48:17.500632 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:48:17.727303 kernel: Loading iSCSI transport class v2.0-870. Jan 14 23:48:17.776295 kernel: iscsi: registered transport (tcp) Jan 14 23:48:17.799880 kernel: iscsi: registered transport (qla4xxx) Jan 14 23:48:17.799965 kernel: QLogic iSCSI HBA Driver Jan 14 23:48:17.803948 kernel: random: crng init done Jan 14 23:48:17.802588 systemd-resolved[385]: Defaulting to hostname 'linux'. Jan 14 23:48:17.804862 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:48:17.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.813419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 14 23:48:17.825573 kernel: audit: type=1130 audit(1768434497.810:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.858645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 23:48:17.898520 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:48:17.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.907980 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 23:48:17.914978 kernel: audit: type=1130 audit(1768434497.903:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.988093 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 23:48:17.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:17.994560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 23:48:18.001598 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 23:48:18.067175 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 23:48:18.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.073000 audit: BPF prog-id=7 op=LOAD Jan 14 23:48:18.073000 audit: BPF prog-id=8 op=LOAD Jan 14 23:48:18.076022 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:48:18.137073 systemd-udevd[639]: Using default interface naming scheme 'v257'. Jan 14 23:48:18.163276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:48:18.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.172890 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 23:48:18.217494 dracut-pre-trigger[712]: rd.md=0: removing MD RAID activation Jan 14 23:48:18.230348 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 23:48:18.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.236000 audit: BPF prog-id=9 op=LOAD Jan 14 23:48:18.239193 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:48:18.287337 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:48:18.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:18.300670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 23:48:18.350313 systemd-networkd[750]: lo: Link UP Jan 14 23:48:18.350333 systemd-networkd[750]: lo: Gained carrier Jan 14 23:48:18.353989 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:48:18.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.359144 systemd[1]: Reached target network.target - Network. Jan 14 23:48:18.461434 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:48:18.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.473071 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 23:48:18.678840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:48:18.681470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:48:18.692008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:48:18.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.699087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:48:18.722724 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 14 23:48:18.722796 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 14 23:48:18.729158 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 14 23:48:18.729636 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 14 23:48:18.737314 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:d9:ac:97:0c:bf Jan 14 23:48:18.739443 (udev-worker)[789]: Network interface NamePolicy= disabled on kernel command line. Jan 14 23:48:18.755489 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:48:18.755513 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:48:18.773396 systemd-networkd[750]: eth0: Link UP Jan 14 23:48:18.777410 systemd-networkd[750]: eth0: Gained carrier Jan 14 23:48:18.777450 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:48:18.795651 kernel: nvme nvme0: using unchecked data buffer Jan 14 23:48:18.799424 systemd-networkd[750]: eth0: DHCPv4 address 172.31.18.197/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 14 23:48:18.803397 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:48:18.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:18.958497 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
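
Here eth0 is matched by the catch-all unit /usr/lib/systemd/network/zz-default.network and obtains 172.31.18.197/20 with gateway 172.31.16.1 over DHCPv4. A hedged way to confirm the same information from the running system (illustrative, not part of the boot log):

    networkctl status eth0      # shows the matched .network file, addresses, gateway and DNS
    networkctl list             # per-link operational state (carrier, routable)
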
Jan 14 23:48:18.968048 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 23:48:19.013877 disk-uuid[875]: Primary Header is updated. Jan 14 23:48:19.013877 disk-uuid[875]: Secondary Entries is updated. Jan 14 23:48:19.013877 disk-uuid[875]: Secondary Header is updated. Jan 14 23:48:19.053190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 14 23:48:19.114400 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 14 23:48:19.175390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 14 23:48:19.495313 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 23:48:19.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:19.511963 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 23:48:19.514956 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:48:19.520273 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 23:48:19.524557 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 23:48:19.582350 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:48:19.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:20.155243 disk-uuid[877]: Warning: The kernel is still using the old partition table. Jan 14 23:48:20.155243 disk-uuid[877]: The new table will be used at the next reboot or after you Jan 14 23:48:20.155243 disk-uuid[877]: run partprobe(8) or kpartx(8) Jan 14 23:48:20.155243 disk-uuid[877]: The operation has completed successfully. Jan 14 23:48:20.176795 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 23:48:20.177219 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 23:48:20.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:20.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:20.186389 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 23:48:20.259262 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1093) Jan 14 23:48:20.263441 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:48:20.263487 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:48:20.272074 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 23:48:20.272173 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 23:48:20.282286 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:48:20.283349 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
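
The earlier GPT warnings (backup header not at the end of the disk, 25804799 != 33554431) and the disk-uuid output above are typical of an EBS volume that was grown past the size baked into the image; the wording of the messages matches sgdisk(8), which rewrites the GPT and then asks for a partition-table re-read. A manual repair on such a disk would look roughly like the following sketch (illustrative only, not necessarily what disk-uuid.service runs):

    sgdisk --move-second-header /dev/nvme0n1    # put the backup GPT header at the true end of the grown disk
    partprobe /dev/nvme0n1                      # have the kernel re-read the partition table, as the warning suggests
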
Jan 14 23:48:20.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:20.288504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 23:48:20.702428 systemd-networkd[750]: eth0: Gained IPv6LL Jan 14 23:48:21.679307 ignition[1112]: Ignition 2.22.0 Jan 14 23:48:21.681020 ignition[1112]: Stage: fetch-offline Jan 14 23:48:21.681929 ignition[1112]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:21.681956 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:21.683862 ignition[1112]: Ignition finished successfully Jan 14 23:48:21.697343 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 23:48:21.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:21.704406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 23:48:21.764894 ignition[1120]: Ignition 2.22.0 Jan 14 23:48:21.764925 ignition[1120]: Stage: fetch Jan 14 23:48:21.765530 ignition[1120]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:21.765561 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:21.765725 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:21.786027 ignition[1120]: PUT result: OK Jan 14 23:48:21.789602 ignition[1120]: parsed url from cmdline: "" Jan 14 23:48:21.789628 ignition[1120]: no config URL provided Jan 14 23:48:21.789646 ignition[1120]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 23:48:21.789679 ignition[1120]: no config at "/usr/lib/ignition/user.ign" Jan 14 23:48:21.789710 ignition[1120]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:21.794028 ignition[1120]: PUT result: OK Jan 14 23:48:21.796180 ignition[1120]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 14 23:48:21.800095 ignition[1120]: GET result: OK Jan 14 23:48:21.800668 ignition[1120]: parsing config with SHA512: f97baa8b7488d096eaa13ce9c474b4d266ecf49ebbc6d1b00e917256587fce20d631970f9471fec51acce3d0fbefd68a00b8ab4b3be40994d98008d731c47c93 Jan 14 23:48:21.811829 unknown[1120]: fetched base config from "system" Jan 14 23:48:21.811861 unknown[1120]: fetched base config from "system" Jan 14 23:48:21.811911 unknown[1120]: fetched user config from "aws" Jan 14 23:48:21.819526 ignition[1120]: fetch: fetch complete Jan 14 23:48:21.821366 ignition[1120]: fetch: fetch passed Jan 14 23:48:21.821484 ignition[1120]: Ignition finished successfully Jan 14 23:48:21.828370 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 23:48:21.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:21.834967 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
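
The fetch stage above follows the EC2 IMDSv2 pattern that Ignition logs: a PUT to /latest/api/token to obtain a session token, then authenticated GETs for the user data. A shell equivalent, shown only for illustration (the header names are the standard IMDSv2 ones and do not appear in the log itself):

    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
            -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
            "http://169.254.169.254/2019-10-01/user-data"     # same endpoint Ignition fetches above
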
Jan 14 23:48:21.891739 ignition[1126]: Ignition 2.22.0 Jan 14 23:48:21.892266 ignition[1126]: Stage: kargs Jan 14 23:48:21.892848 ignition[1126]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:21.892870 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:21.893007 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:21.897361 ignition[1126]: PUT result: OK Jan 14 23:48:21.905779 ignition[1126]: kargs: kargs passed Jan 14 23:48:21.905892 ignition[1126]: Ignition finished successfully Jan 14 23:48:21.912036 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 23:48:21.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:21.913853 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 23:48:21.967834 ignition[1132]: Ignition 2.22.0 Jan 14 23:48:21.968302 ignition[1132]: Stage: disks Jan 14 23:48:21.968816 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:21.968838 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:21.968977 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:21.970703 ignition[1132]: PUT result: OK Jan 14 23:48:21.987942 ignition[1132]: disks: disks passed Jan 14 23:48:21.988220 ignition[1132]: Ignition finished successfully Jan 14 23:48:21.994003 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 23:48:21.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:21.999009 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 23:48:22.002130 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 23:48:22.005016 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:48:22.009830 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:48:22.012601 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:48:22.024644 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 23:48:22.227642 systemd-fsck[1140]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 23:48:22.232200 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 23:48:22.244832 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 14 23:48:22.244901 kernel: audit: type=1130 audit(1768434502.237:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:22.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:22.245215 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 23:48:22.517266 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 05dab3f9-40c2-46d9-a2a2-3da8ed7c4451 r/w with ordered data mode. Quota mode: none. Jan 14 23:48:22.518797 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 23:48:22.523534 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 14 23:48:22.588032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:48:22.595670 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 23:48:22.603540 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 23:48:22.603642 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 23:48:22.603709 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:48:22.630746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 23:48:22.637494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 23:48:22.654352 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1159) Jan 14 23:48:22.660380 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:48:22.660453 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:48:22.669276 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 23:48:22.669350 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 23:48:22.671957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 23:48:23.520096 initrd-setup-root[1183]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 23:48:23.567833 initrd-setup-root[1190]: cut: /sysroot/etc/group: No such file or directory Jan 14 23:48:23.577635 initrd-setup-root[1197]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 23:48:23.590123 initrd-setup-root[1204]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 23:48:24.391492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 23:48:24.403955 kernel: audit: type=1130 audit(1768434504.392:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:24.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:24.398926 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 23:48:24.418517 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 23:48:24.439329 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 23:48:24.441922 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:48:24.495105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 23:48:24.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:24.506283 ignition[1272]: INFO : Ignition 2.22.0 Jan 14 23:48:24.506283 ignition[1272]: INFO : Stage: mount Jan 14 23:48:24.506283 ignition[1272]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:24.506283 ignition[1272]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:24.506283 ignition[1272]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:24.506283 ignition[1272]: INFO : PUT result: OK Jan 14 23:48:24.519504 kernel: audit: type=1130 audit(1768434504.497:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:24.519804 ignition[1272]: INFO : mount: mount passed Jan 14 23:48:24.521586 ignition[1272]: INFO : Ignition finished successfully Jan 14 23:48:24.526407 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 23:48:24.539798 kernel: audit: type=1130 audit(1768434504.525:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:24.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:24.532883 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 23:48:24.561959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:48:24.602266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1283) Jan 14 23:48:24.606810 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:48:24.606865 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:48:24.615592 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 23:48:24.615670 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 23:48:24.619585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 23:48:24.665182 ignition[1300]: INFO : Ignition 2.22.0 Jan 14 23:48:24.665182 ignition[1300]: INFO : Stage: files Jan 14 23:48:24.675870 ignition[1300]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:24.675870 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:24.675870 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:24.675870 ignition[1300]: INFO : PUT result: OK Jan 14 23:48:24.688154 ignition[1300]: DEBUG : files: compiled without relabeling support, skipping Jan 14 23:48:24.691851 ignition[1300]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 23:48:24.691851 ignition[1300]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 23:48:24.804779 ignition[1300]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 23:48:24.807965 ignition[1300]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 23:48:24.807965 ignition[1300]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 23:48:24.806779 unknown[1300]: wrote ssh authorized keys file for user: core Jan 14 23:48:24.816844 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 23:48:24.816844 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 14 23:48:24.892710 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 23:48:25.133541 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:48:25.139196 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 23:48:25.184105 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 23:48:25.184105 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 23:48:25.184105 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 14 23:48:25.631993 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 23:48:26.047863 ignition[1300]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 23:48:26.047863 ignition[1300]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 23:48:26.085743 ignition[1300]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:48:26.091876 ignition[1300]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:48:26.091876 ignition[1300]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 23:48:26.091876 ignition[1300]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 23:48:26.101944 ignition[1300]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 23:48:26.101944 ignition[1300]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:48:26.101944 ignition[1300]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:48:26.101944 ignition[1300]: INFO : files: files passed Jan 14 23:48:26.101944 ignition[1300]: INFO : Ignition finished successfully Jan 14 23:48:26.115091 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 23:48:26.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.129294 kernel: audit: type=1130 audit(1768434506.123:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.129222 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 23:48:26.135714 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 23:48:26.152049 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 23:48:26.159775 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 23:48:26.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:26.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.173887 kernel: audit: type=1130 audit(1768434506.163:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.173956 kernel: audit: type=1131 audit(1768434506.163:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.186850 initrd-setup-root-after-ignition[1331]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:48:26.186850 initrd-setup-root-after-ignition[1331]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:48:26.195572 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:48:26.201512 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:48:26.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.212725 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 23:48:26.215210 kernel: audit: type=1130 audit(1768434506.206:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.219221 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 23:48:26.307081 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 23:48:26.307390 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 23:48:26.325038 kernel: audit: type=1130 audit(1768434506.313:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.325083 kernel: audit: type=1131 audit(1768434506.313:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.314992 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 23:48:26.329761 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 23:48:26.334552 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 23:48:26.339271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
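The files stage above (the helm tarball, the home-directory manifests, the kubernetes sysext link and the prepare-helm.service preset) is driven by the user config fetched from IMDS earlier; the config itself never appears in the journal. Purely as an illustration of the shape such a config could take, a sketch using Ignition v3 spec field names, where the spec version, file modes, omitted file contents, SSH key and unit body are all placeholders rather than the real values:

import json

config = {
    "ignition": {"version": "3.4.0"},  # placeholder spec version
    "passwd": {
        "users": [
            # corresponds to ensureUsers op(1)/op(2): modify "core", add ssh keys
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"},
            },
            {"path": "/home/core/install.sh", "mode": 493},  # 0o755; contents omitted here
            {"path": "/home/core/nginx.yaml"},                # contents omitted here
            {"path": "/etc/flatcar/update.conf"},             # contents omitted here
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            # the real unit body is not in the log; "[Unit]\n..." is a stand-in
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
        ]
    },
}

print(json.dumps(config, indent=2))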
Jan 14 23:48:26.394626 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:48:26.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.402650 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 23:48:26.442741 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:48:26.444192 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:48:26.451121 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:48:26.454542 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 23:48:26.461455 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 23:48:26.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.461725 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:48:26.467321 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 23:48:26.470061 systemd[1]: Stopped target basic.target - Basic System. Jan 14 23:48:26.477452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 23:48:26.481015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:48:26.488529 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 23:48:26.491555 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:48:26.499409 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 23:48:26.504684 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 23:48:26.512254 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 23:48:26.514993 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 23:48:26.519784 systemd[1]: Stopped target swap.target - Swaps. Jan 14 23:48:26.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.522609 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 23:48:26.522853 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:48:26.533625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:48:26.536821 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:48:26.541530 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 23:48:26.548293 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:48:26.552221 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 23:48:26.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.552476 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 23:48:26.563622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 14 23:48:26.563996 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:48:26.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.573199 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 23:48:26.574414 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 23:48:26.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.581922 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 23:48:26.592582 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 23:48:26.596030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 23:48:26.596371 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:48:26.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.612263 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 23:48:26.614794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:48:26.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.625422 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 23:48:26.632016 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:48:26.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.649284 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 23:48:26.656346 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 23:48:26.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.674430 ignition[1355]: INFO : Ignition 2.22.0 Jan 14 23:48:26.677437 ignition[1355]: INFO : Stage: umount Jan 14 23:48:26.679374 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:48:26.679374 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 14 23:48:26.679374 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 14 23:48:26.692425 ignition[1355]: INFO : PUT result: OK Jan 14 23:48:26.686776 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 14 23:48:26.704555 ignition[1355]: INFO : umount: umount passed Jan 14 23:48:26.706530 ignition[1355]: INFO : Ignition finished successfully Jan 14 23:48:26.710099 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 23:48:26.712396 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 23:48:26.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.723171 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 23:48:26.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.723311 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 23:48:26.726605 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 23:48:26.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.726710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 23:48:26.731243 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 23:48:26.731344 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 23:48:26.738564 systemd[1]: Stopped target network.target - Network. Jan 14 23:48:26.747856 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 23:48:26.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.747982 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 23:48:26.754658 systemd[1]: Stopped target paths.target - Path Units. Jan 14 23:48:26.767409 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 23:48:26.774295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:48:26.782644 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 23:48:26.785240 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 23:48:26.790474 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 23:48:26.790557 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:48:26.793337 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 23:48:26.793403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:48:26.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.803724 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 23:48:26.803788 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Jan 14 23:48:26.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.807213 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 23:48:26.807342 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 23:48:26.812089 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 23:48:26.812187 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 23:48:26.817490 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 23:48:26.824839 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 23:48:26.856724 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 23:48:26.860844 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 23:48:26.863000 audit: BPF prog-id=6 op=UNLOAD Jan 14 23:48:26.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.873629 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 23:48:26.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.873852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 23:48:26.884821 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 23:48:26.888000 audit: BPF prog-id=9 op=UNLOAD Jan 14 23:48:26.892472 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 23:48:26.892557 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:48:26.903065 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 23:48:26.911128 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 23:48:26.914645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 23:48:26.923715 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 23:48:26.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.923977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:48:26.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.931451 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 23:48:26.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.931569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 23:48:26.935646 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:48:26.949447 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 23:48:26.949628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 14 23:48:26.957379 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 23:48:26.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:26.957520 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 23:48:26.988894 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 23:48:26.993358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:48:27.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.003312 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 23:48:27.003433 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 23:48:27.008106 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 23:48:27.008178 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:48:27.010654 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 23:48:27.010752 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 23:48:27.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.019181 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 23:48:27.019311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 23:48:27.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.031907 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 23:48:27.032011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 23:48:27.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.057835 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 23:48:27.065850 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 23:48:27.066780 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:48:27.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.075623 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 23:48:27.075890 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:48:27.084622 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 14 23:48:27.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.084739 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:48:27.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.089478 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 23:48:27.089587 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:48:27.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.094824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:48:27.095546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:48:27.112290 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 23:48:27.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:27.113335 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 23:48:27.117879 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 23:48:27.118069 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 23:48:27.124290 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 23:48:27.130997 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 23:48:27.180499 systemd[1]: Switching root. Jan 14 23:48:27.251320 systemd-journald[360]: Journal stopped Jan 14 23:48:31.311313 systemd-journald[360]: Received SIGTERM from PID 1 (systemd). 
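Most of the initrd teardown above is bracketed by kernel audit records: type=1130 (SERVICE_START) and type=1131 (SERVICE_STOP), each carrying the unit name in its msg field. A small sketch that recovers the start/stop sequence from a saved copy of such a journal, assuming one record per line as journalctl prints it; the regexes only target the fields visible above:

import re
import sys

# Two shapes appear in this journal: early-boot "audit[1]: SERVICE_START ..."
# lines and kernel-forwarded "audit: type=1130 audit(<epoch>:<serial>): ..."
# lines; both carry msg='unit=<name> ...'.
UNIT_RE = re.compile(r"unit=(?P<unit>[\w@.\\:-]+)")
KIND_RE = re.compile(r"SERVICE_START|SERVICE_STOP|type=1130|type=1131")
KIND_MAP = {
    "SERVICE_START": "start", "type=1130": "start",
    "SERVICE_STOP": "stop",   "type=1131": "stop",
}

def service_events(stream):
    for line in stream:
        kind, unit = KIND_RE.search(line), UNIT_RE.search(line)
        if kind and unit:
            yield KIND_MAP[kind.group(0)], unit.group("unit")

if __name__ == "__main__":
    for kind, unit in service_events(sys.stdin):
        print(f"{kind:5s} {unit}")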
Jan 14 23:48:31.311443 kernel: kauditd_printk_skb: 39 callbacks suppressed Jan 14 23:48:31.311499 kernel: audit: type=1335 audit(1768434507.257:82): pid=360 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Jan 14 23:48:31.311541 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 23:48:31.311573 kernel: SELinux: policy capability open_perms=1 Jan 14 23:48:31.311614 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 23:48:31.311648 kernel: SELinux: policy capability always_check_network=0 Jan 14 23:48:31.311700 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 23:48:31.311761 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 23:48:31.311804 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 23:48:31.311838 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 23:48:31.311883 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 23:48:31.311917 kernel: audit: type=1403 audit(1768434508.322:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 23:48:31.311949 systemd[1]: Successfully loaded SELinux policy in 220.861ms. Jan 14 23:48:31.311989 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.872ms. Jan 14 23:48:31.312031 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 23:48:31.312066 systemd[1]: Detected virtualization amazon. Jan 14 23:48:31.312099 systemd[1]: Detected architecture arm64. Jan 14 23:48:31.312130 systemd[1]: Detected first boot. Jan 14 23:48:31.312159 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 23:48:31.312193 kernel: audit: type=1334 audit(1768434508.802:84): prog-id=10 op=LOAD Jan 14 23:48:31.312224 kernel: audit: type=1334 audit(1768434508.803:85): prog-id=10 op=UNLOAD Jan 14 23:48:31.316335 kernel: audit: type=1334 audit(1768434508.803:86): prog-id=11 op=LOAD Jan 14 23:48:31.316366 kernel: audit: type=1334 audit(1768434508.803:87): prog-id=11 op=UNLOAD Jan 14 23:48:31.316397 kernel: NET: Registered PF_VSOCK protocol family Jan 14 23:48:31.316430 zram_generator::config[1400]: No configuration found. Jan 14 23:48:31.316464 systemd[1]: Populated /etc with preset unit settings. Jan 14 23:48:31.316497 kernel: audit: type=1334 audit(1768434510.630:88): prog-id=12 op=LOAD Jan 14 23:48:31.316525 kernel: audit: type=1334 audit(1768434510.630:89): prog-id=3 op=UNLOAD Jan 14 23:48:31.316557 kernel: audit: type=1334 audit(1768434510.631:90): prog-id=13 op=LOAD Jan 14 23:48:31.316587 kernel: audit: type=1334 audit(1768434510.633:91): prog-id=14 op=LOAD Jan 14 23:48:31.316619 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 23:48:31.316653 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 23:48:31.316688 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 23:48:31.316724 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 23:48:31.316756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 23:48:31.316791 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 14 23:48:31.316822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 23:48:31.316854 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 23:48:31.316884 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 23:48:31.316916 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 23:48:31.316951 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 23:48:31.316984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:48:31.317018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:48:31.317050 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 23:48:31.317079 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 23:48:31.317111 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 23:48:31.317140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 23:48:31.317173 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 23:48:31.317203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:48:31.317250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:48:31.317287 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 23:48:31.317322 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 23:48:31.317352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 23:48:31.317396 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 23:48:31.317425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:48:31.317461 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 23:48:31.317491 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 23:48:31.317523 systemd[1]: Reached target slices.target - Slice Units. Jan 14 23:48:31.317557 systemd[1]: Reached target swap.target - Swaps. Jan 14 23:48:31.317587 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 23:48:31.317627 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 23:48:31.317658 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 23:48:31.317693 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:48:31.317724 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 23:48:31.317753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:48:31.317784 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 23:48:31.317814 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 23:48:31.317843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 23:48:31.317876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:48:31.317914 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
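The slice names above use systemd's escaping for unit names: a literal '-' inside a name component is written as \x2d, so system-serial\x2dgetty.slice is the slice for serial-getty instances. A tiny helper that decodes just those \xNN escapes when reading such names out of a log (this covers only the hex-escape part, not the full inverse of systemd-escape):

import re

def unescape_unit(name: str) -> str:
    # Replace each \xNN escape with the character it encodes, e.g. \x2d -> '-'.
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(r"system-serial\x2dgetty.slice"))   # system-serial-getty.slice
print(unescape_unit(r"system-addon\x2dconfig.slice"))   # system-addon-config.slice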
Jan 14 23:48:31.317943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 23:48:31.317972 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 23:48:31.318001 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 23:48:31.318030 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 23:48:31.318059 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 23:48:31.318093 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 23:48:31.318127 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 23:48:31.318160 systemd[1]: Reached target machines.target - Containers. Jan 14 23:48:31.318190 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 23:48:31.318222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:48:31.322334 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 23:48:31.322370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 23:48:31.322401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:48:31.322439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:48:31.322468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:48:31.322497 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 23:48:31.322530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:48:31.322567 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 23:48:31.322598 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 23:48:31.322632 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 23:48:31.322662 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 23:48:31.322697 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 23:48:31.322729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:48:31.325000 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 23:48:31.325036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 23:48:31.325067 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 23:48:31.325100 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 23:48:31.325130 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 23:48:31.325159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 23:48:31.325191 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 23:48:31.325243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 23:48:31.325279 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 14 23:48:31.325309 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 23:48:31.325342 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 23:48:31.325372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 23:48:31.325407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:48:31.325438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:48:31.325473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:48:31.325509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:48:31.325540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:48:31.325572 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:48:31.325605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:48:31.325635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 23:48:31.325665 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 23:48:31.325693 kernel: fuse: init (API version 7.41) Jan 14 23:48:31.325722 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 23:48:31.325755 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 23:48:31.325785 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:48:31.325820 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 23:48:31.325850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:48:31.325883 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:48:31.325914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 23:48:31.325997 systemd-journald[1484]: Collecting audit messages is enabled. Jan 14 23:48:31.326052 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:48:31.326083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 23:48:31.326113 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:48:31.326142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 23:48:31.326171 systemd-journald[1484]: Journal started Jan 14 23:48:31.326222 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec26d7cff3ee3388e11677ef765d36df) is 8M, max 75.3M, 67.3M free. Jan 14 23:48:31.330375 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 23:48:30.795000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 23:48:31.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:31.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.041000 audit: BPF prog-id=14 op=UNLOAD Jan 14 23:48:31.041000 audit: BPF prog-id=13 op=UNLOAD Jan 14 23:48:31.046000 audit: BPF prog-id=15 op=LOAD Jan 14 23:48:31.050000 audit: BPF prog-id=16 op=LOAD Jan 14 23:48:31.050000 audit: BPF prog-id=17 op=LOAD Jan 14 23:48:31.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.297000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 23:48:31.297000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc88dfbc0 a2=4000 a3=0 items=0 ppid=1 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:31.297000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 23:48:30.607907 systemd[1]: Queued start job for default target multi-user.target. Jan 14 23:48:30.635664 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 14 23:48:30.636563 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 14 23:48:31.355764 kernel: ACPI: bus type drm_connector registered Jan 14 23:48:31.355848 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 23:48:31.367261 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 23:48:31.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.370132 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 23:48:31.372703 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 23:48:31.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.377198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 23:48:31.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.380969 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 23:48:31.381431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 23:48:31.384863 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:48:31.394344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:48:31.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.409501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:48:31.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.416311 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 14 23:48:31.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.459941 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 23:48:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.472347 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 23:48:31.475344 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 23:48:31.481666 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 23:48:31.486691 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 23:48:31.552711 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec26d7cff3ee3388e11677ef765d36df is 68.419ms for 1059 entries. Jan 14 23:48:31.552711 systemd-journald[1484]: System Journal (/var/log/journal/ec26d7cff3ee3388e11677ef765d36df) is 8M, max 588.1M, 580.1M free. Jan 14 23:48:31.653916 kernel: loop1: detected capacity change from 0 to 100192 Jan 14 23:48:31.654001 systemd-journald[1484]: Received client request to flush runtime journal. Jan 14 23:48:31.594643 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Jan 14 23:48:31.594669 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Jan 14 23:48:31.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.656466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:48:31.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.662630 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:48:31.667009 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 23:48:31.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.668309 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 23:48:31.675211 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 23:48:31.682621 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 23:48:31.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.690705 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 23:48:31.694183 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 14 23:48:31.723967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 23:48:31.727218 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 23:48:31.745309 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:48:31.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.855597 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 23:48:31.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:31.859000 audit: BPF prog-id=18 op=LOAD Jan 14 23:48:31.859000 audit: BPF prog-id=19 op=LOAD Jan 14 23:48:31.859000 audit: BPF prog-id=20 op=LOAD Jan 14 23:48:31.862579 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 23:48:31.865000 audit: BPF prog-id=21 op=LOAD Jan 14 23:48:31.869818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 23:48:31.875741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 23:48:31.902284 kernel: loop2: detected capacity change from 0 to 109872 Jan 14 23:48:31.902000 audit: BPF prog-id=22 op=LOAD Jan 14 23:48:31.902000 audit: BPF prog-id=23 op=LOAD Jan 14 23:48:31.904000 audit: BPF prog-id=24 op=LOAD Jan 14 23:48:31.907740 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 23:48:31.914000 audit: BPF prog-id=25 op=LOAD Jan 14 23:48:31.914000 audit: BPF prog-id=26 op=LOAD Jan 14 23:48:31.914000 audit: BPF prog-id=27 op=LOAD Jan 14 23:48:31.917566 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 23:48:31.972573 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Jan 14 23:48:31.972607 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Jan 14 23:48:31.995606 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:48:31.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.035158 systemd-nsresourced[1561]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 23:48:32.042478 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 23:48:32.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.051786 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 23:48:32.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:32.182266 kernel: loop3: detected capacity change from 0 to 211168 Jan 14 23:48:32.261617 systemd-oomd[1557]: No swap; memory pressure usage will be degraded Jan 14 23:48:32.263405 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 23:48:32.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.267761 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 14 23:48:32.267831 kernel: audit: type=1130 audit(1768434512.265:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.353158 systemd-resolved[1558]: Positive Trust Anchors: Jan 14 23:48:32.353196 systemd-resolved[1558]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 23:48:32.353206 systemd-resolved[1558]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:48:32.353310 systemd-resolved[1558]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:48:32.368175 systemd-resolved[1558]: Defaulting to hostname 'linux'. Jan 14 23:48:32.372021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:48:32.374792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:48:32.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.383283 kernel: audit: type=1130 audit(1768434512.370:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.496297 kernel: loop4: detected capacity change from 0 to 61504 Jan 14 23:48:32.534271 kernel: loop5: detected capacity change from 0 to 100192 Jan 14 23:48:32.602297 kernel: loop6: detected capacity change from 0 to 109872 Jan 14 23:48:32.624285 kernel: loop7: detected capacity change from 0 to 211168 Jan 14 23:48:32.631490 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 23:48:32.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:32.633000 audit: BPF prog-id=8 op=UNLOAD Jan 14 23:48:32.641622 kernel: audit: type=1130 audit(1768434512.633:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:32.641718 kernel: audit: type=1334 audit(1768434512.633:149): prog-id=8 op=UNLOAD Jan 14 23:48:32.633000 audit: BPF prog-id=7 op=UNLOAD Jan 14 23:48:32.643367 kernel: audit: type=1334 audit(1768434512.633:150): prog-id=7 op=UNLOAD Jan 14 23:48:32.638000 audit: BPF prog-id=28 op=LOAD Jan 14 23:48:32.643529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:48:32.638000 audit: BPF prog-id=29 op=LOAD Jan 14 23:48:32.650871 kernel: audit: type=1334 audit(1768434512.638:151): prog-id=28 op=LOAD Jan 14 23:48:32.650961 kernel: audit: type=1334 audit(1768434512.638:152): prog-id=29 op=LOAD Jan 14 23:48:32.675588 kernel: loop1: detected capacity change from 0 to 61504 Jan 14 23:48:32.695022 (sd-merge)[1583]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Jan 14 23:48:32.702459 (sd-merge)[1583]: Merged extensions into '/usr'. Jan 14 23:48:32.711213 systemd[1]: Reload requested from client PID 1511 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 23:48:32.711462 systemd[1]: Reloading... Jan 14 23:48:32.713532 systemd-udevd[1585]: Using default interface naming scheme 'v257'. Jan 14 23:48:32.850340 zram_generator::config[1618]: No configuration found. Jan 14 23:48:32.977870 (udev-worker)[1629]: Network interface NamePolicy= disabled on kernel command line. Jan 14 23:48:33.495127 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 23:48:33.495798 systemd[1]: Reloading finished in 783 ms. Jan 14 23:48:33.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:33.530500 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:48:33.547479 kernel: audit: type=1130 audit(1768434513.532:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:33.562390 kernel: audit: type=1130 audit(1768434513.554:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:33.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:33.547322 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 23:48:33.618637 systemd[1]: Starting ensure-sysext.service... Jan 14 23:48:33.621000 audit: BPF prog-id=30 op=LOAD Jan 14 23:48:33.625350 kernel: audit: type=1334 audit(1768434513.621:155): prog-id=30 op=LOAD Jan 14 23:48:33.625787 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:48:33.633690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 23:48:33.647000 audit: BPF prog-id=31 op=LOAD Jan 14 23:48:33.647000 audit: BPF prog-id=25 op=UNLOAD Jan 14 23:48:33.647000 audit: BPF prog-id=32 op=LOAD Jan 14 23:48:33.648000 audit: BPF prog-id=33 op=LOAD Jan 14 23:48:33.648000 audit: BPF prog-id=26 op=UNLOAD Jan 14 23:48:33.640351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:48:33.648000 audit: BPF prog-id=27 op=UNLOAD Jan 14 23:48:33.649000 audit: BPF prog-id=34 op=LOAD Jan 14 23:48:33.651000 audit: BPF prog-id=15 op=UNLOAD Jan 14 23:48:33.651000 audit: BPF prog-id=35 op=LOAD Jan 14 23:48:33.651000 audit: BPF prog-id=36 op=LOAD Jan 14 23:48:33.651000 audit: BPF prog-id=16 op=UNLOAD Jan 14 23:48:33.651000 audit: BPF prog-id=17 op=UNLOAD Jan 14 23:48:33.652000 audit: BPF prog-id=37 op=LOAD Jan 14 23:48:33.685000 audit: BPF prog-id=18 op=UNLOAD Jan 14 23:48:33.685000 audit: BPF prog-id=38 op=LOAD Jan 14 23:48:33.685000 audit: BPF prog-id=39 op=LOAD Jan 14 23:48:33.686000 audit: BPF prog-id=19 op=UNLOAD Jan 14 23:48:33.686000 audit: BPF prog-id=20 op=UNLOAD Jan 14 23:48:33.689000 audit: BPF prog-id=40 op=LOAD Jan 14 23:48:33.690000 audit: BPF prog-id=22 op=UNLOAD Jan 14 23:48:33.690000 audit: BPF prog-id=41 op=LOAD Jan 14 23:48:33.690000 audit: BPF prog-id=42 op=LOAD Jan 14 23:48:33.690000 audit: BPF prog-id=23 op=UNLOAD Jan 14 23:48:33.690000 audit: BPF prog-id=24 op=UNLOAD Jan 14 23:48:33.691000 audit: BPF prog-id=43 op=LOAD Jan 14 23:48:33.691000 audit: BPF prog-id=44 op=LOAD Jan 14 23:48:33.691000 audit: BPF prog-id=28 op=UNLOAD Jan 14 23:48:33.691000 audit: BPF prog-id=29 op=UNLOAD Jan 14 23:48:33.696000 audit: BPF prog-id=45 op=LOAD Jan 14 23:48:33.697000 audit: BPF prog-id=21 op=UNLOAD Jan 14 23:48:33.712576 systemd[1]: Reload requested from client PID 1743 ('systemctl') (unit ensure-sysext.service)... Jan 14 23:48:33.712595 systemd[1]: Reloading... Jan 14 23:48:33.779811 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 23:48:33.779897 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 23:48:33.780530 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 23:48:33.783532 systemd-tmpfiles[1747]: ACLs are not supported, ignoring. Jan 14 23:48:33.783713 systemd-tmpfiles[1747]: ACLs are not supported, ignoring. Jan 14 23:48:33.800884 systemd-tmpfiles[1747]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:48:33.800918 systemd-tmpfiles[1747]: Skipping /boot Jan 14 23:48:33.827063 systemd-tmpfiles[1747]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:48:33.827096 systemd-tmpfiles[1747]: Skipping /boot Jan 14 23:48:33.982376 zram_generator::config[1837]: No configuration found. Jan 14 23:48:34.160907 systemd-networkd[1746]: lo: Link UP Jan 14 23:48:34.160932 systemd-networkd[1746]: lo: Gained carrier Jan 14 23:48:34.165991 systemd-networkd[1746]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:48:34.166016 systemd-networkd[1746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 23:48:34.169467 systemd-networkd[1746]: eth0: Link UP Jan 14 23:48:34.170529 systemd-networkd[1746]: eth0: Gained carrier Jan 14 23:48:34.170580 systemd-networkd[1746]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:48:34.184344 systemd-networkd[1746]: eth0: DHCPv4 address 172.31.18.197/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 14 23:48:34.479028 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 14 23:48:34.482964 systemd[1]: Reloading finished in 769 ms. Jan 14 23:48:34.510497 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:48:34.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.514000 audit: BPF prog-id=46 op=LOAD Jan 14 23:48:34.514000 audit: BPF prog-id=45 op=UNLOAD Jan 14 23:48:34.517000 audit: BPF prog-id=47 op=LOAD Jan 14 23:48:34.517000 audit: BPF prog-id=37 op=UNLOAD Jan 14 23:48:34.517000 audit: BPF prog-id=48 op=LOAD Jan 14 23:48:34.518000 audit: BPF prog-id=49 op=LOAD Jan 14 23:48:34.518000 audit: BPF prog-id=38 op=UNLOAD Jan 14 23:48:34.518000 audit: BPF prog-id=39 op=UNLOAD Jan 14 23:48:34.520000 audit: BPF prog-id=50 op=LOAD Jan 14 23:48:34.520000 audit: BPF prog-id=40 op=UNLOAD Jan 14 23:48:34.520000 audit: BPF prog-id=51 op=LOAD Jan 14 23:48:34.521000 audit: BPF prog-id=52 op=LOAD Jan 14 23:48:34.521000 audit: BPF prog-id=41 op=UNLOAD Jan 14 23:48:34.521000 audit: BPF prog-id=42 op=UNLOAD Jan 14 23:48:34.522000 audit: BPF prog-id=53 op=LOAD Jan 14 23:48:34.527000 audit: BPF prog-id=34 op=UNLOAD Jan 14 23:48:34.527000 audit: BPF prog-id=54 op=LOAD Jan 14 23:48:34.527000 audit: BPF prog-id=55 op=LOAD Jan 14 23:48:34.527000 audit: BPF prog-id=35 op=UNLOAD Jan 14 23:48:34.527000 audit: BPF prog-id=36 op=UNLOAD Jan 14 23:48:34.528000 audit: BPF prog-id=56 op=LOAD Jan 14 23:48:34.528000 audit: BPF prog-id=30 op=UNLOAD Jan 14 23:48:34.529000 audit: BPF prog-id=57 op=LOAD Jan 14 23:48:34.529000 audit: BPF prog-id=31 op=UNLOAD Jan 14 23:48:34.530000 audit: BPF prog-id=58 op=LOAD Jan 14 23:48:34.530000 audit: BPF prog-id=59 op=LOAD Jan 14 23:48:34.530000 audit: BPF prog-id=32 op=UNLOAD Jan 14 23:48:34.530000 audit: BPF prog-id=33 op=UNLOAD Jan 14 23:48:34.532000 audit: BPF prog-id=60 op=LOAD Jan 14 23:48:34.532000 audit: BPF prog-id=61 op=LOAD Jan 14 23:48:34.532000 audit: BPF prog-id=43 op=UNLOAD Jan 14 23:48:34.532000 audit: BPF prog-id=44 op=UNLOAD Jan 14 23:48:34.538973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:48:34.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.545109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:48:34.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.615342 systemd[1]: Reached target network.target - Network. Jan 14 23:48:34.620759 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 14 23:48:34.627723 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 23:48:34.632530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:48:34.636839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:48:34.644405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:48:34.658685 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:48:34.664467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:48:34.667146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:48:34.667705 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:48:34.672330 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 23:48:34.683716 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 23:48:34.689380 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:48:34.696611 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 23:48:34.706316 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 23:48:34.713504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 23:48:34.719343 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 23:48:34.730407 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 23:48:34.745503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:48:34.755197 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:48:34.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.756756 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:48:34.759314 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:48:34.778021 systemd[1]: Finished ensure-sysext.service. 
Jan 14 23:48:34.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.789264 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:48:34.790551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:48:34.791832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:48:34.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.806000 audit[1907]: SYSTEM_BOOT pid=1907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.814025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:48:34.814626 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:48:34.816607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:48:34.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.826102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 23:48:34.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.840452 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 23:48:34.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.858470 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 23:48:34.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:34.875985 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 14 23:48:34.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:35.026504 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 23:48:35.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:35.030789 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 23:48:35.042000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 23:48:35.042000 audit[1933]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8317dd0 a2=420 a3=0 items=0 ppid=1890 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:35.042000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:48:35.043968 augenrules[1933]: No rules Jan 14 23:48:35.047900 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:48:35.050157 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:48:35.998401 systemd-networkd[1746]: eth0: Gained IPv6LL Jan 14 23:48:36.003355 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 23:48:36.006814 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 23:48:37.981636 ldconfig[1897]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 23:48:37.988728 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 23:48:37.993970 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 23:48:38.018331 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 23:48:38.021757 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:48:38.024450 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 23:48:38.027405 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 23:48:38.030613 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 23:48:38.033433 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 23:48:38.036431 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 23:48:38.039497 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 23:48:38.041988 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 23:48:38.045451 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 23:48:38.045495 systemd[1]: Reached target paths.target - Path Units. 
Jan 14 23:48:38.047932 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:48:38.052206 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 23:48:38.057467 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 23:48:38.063830 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 23:48:38.067218 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 23:48:38.070463 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 23:48:38.077217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 23:48:38.080536 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 23:48:38.084401 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 23:48:38.087154 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:48:38.089715 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:48:38.092055 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:48:38.092107 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:48:38.098414 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 23:48:38.105746 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 23:48:38.113767 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 23:48:38.135085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 23:48:38.142554 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 23:48:38.147710 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 23:48:38.150419 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 23:48:38.155681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:48:38.162751 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 23:48:38.171201 systemd[1]: Started ntpd.service - Network Time Service. Jan 14 23:48:38.185505 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 23:48:38.195348 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 23:48:38.211453 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 14 23:48:38.218859 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 23:48:38.231619 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 23:48:38.240313 jq[1949]: false Jan 14 23:48:38.248819 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 23:48:38.251300 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 23:48:38.255005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 23:48:38.257917 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 23:48:38.263303 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 14 23:48:38.271880 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 23:48:38.275406 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 23:48:38.275921 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 23:48:38.326962 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 23:48:38.330344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 23:48:38.365127 extend-filesystems[1950]: Found /dev/nvme0n1p6 Jan 14 23:48:38.390708 jq[1965]: true Jan 14 23:48:38.391872 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 23:48:38.397651 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 23:48:38.429017 ntpd[1953]: ntpd 4.2.8p18@1.4062-o Wed Jan 14 21:31:34 UTC 2026 (1): Starting Jan 14 23:48:38.429140 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: ntpd 4.2.8p18@1.4062-o Wed Jan 14 21:31:34 UTC 2026 (1): Starting Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: ---------------------------------------------------- Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: corporation. Support and training for ntp-4 are Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: available at https://www.nwtime.org/support Jan 14 23:48:38.431251 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: ---------------------------------------------------- Jan 14 23:48:38.429159 ntpd[1953]: ---------------------------------------------------- Jan 14 23:48:38.429176 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Jan 14 23:48:38.429193 ntpd[1953]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 14 23:48:38.447499 extend-filesystems[1950]: Found /dev/nvme0n1p9 Jan 14 23:48:38.447499 extend-filesystems[1950]: Checking size of /dev/nvme0n1p9 Jan 14 23:48:38.465735 jq[1999]: true Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: proto: precision = 0.096 usec (-23) Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: basedate set to 2026-01-02 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: gps base set to 2026-01-04 (week 2400) Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listen normally on 3 eth0 172.31.18.197:123 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listen normally on 4 lo [::1]:123 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listen normally on 5 eth0 [fe80::4d9:acff:fe97:cbf%2]:123 Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: Listening on routing socket on fd #22 for interface updates Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 23:48:38.465930 ntpd[1953]: 14 Jan 23:48:38 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 23:48:38.429209 ntpd[1953]: corporation. Support and training for ntp-4 are Jan 14 23:48:38.429287 ntpd[1953]: available at https://www.nwtime.org/support Jan 14 23:48:38.429309 ntpd[1953]: ---------------------------------------------------- Jan 14 23:48:38.436528 ntpd[1953]: proto: precision = 0.096 usec (-23) Jan 14 23:48:38.438596 ntpd[1953]: basedate set to 2026-01-02 Jan 14 23:48:38.438628 ntpd[1953]: gps base set to 2026-01-04 (week 2400) Jan 14 23:48:38.438821 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Jan 14 23:48:38.438866 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 14 23:48:38.440507 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Jan 14 23:48:38.440558 ntpd[1953]: Listen normally on 3 eth0 172.31.18.197:123 Jan 14 23:48:38.440607 ntpd[1953]: Listen normally on 4 lo [::1]:123 Jan 14 23:48:38.440652 ntpd[1953]: Listen normally on 5 eth0 [fe80::4d9:acff:fe97:cbf%2]:123 Jan 14 23:48:38.440695 ntpd[1953]: Listening on routing socket on fd #22 for interface updates Jan 14 23:48:38.462088 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 23:48:38.462145 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 14 23:48:38.508560 extend-filesystems[1950]: Resized partition /dev/nvme0n1p9 Jan 14 23:48:38.527111 extend-filesystems[2017]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 23:48:38.544329 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 23:48:38.553184 dbus-daemon[1947]: [system] SELinux support is enabled Jan 14 23:48:38.553656 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 23:48:38.562734 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 23:48:38.562802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 14 23:48:38.566244 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 23:48:38.566285 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 23:48:38.578711 tar[1970]: linux-arm64/LICENSE Jan 14 23:48:38.579277 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Jan 14 23:48:38.579509 tar[1970]: linux-arm64/helm Jan 14 23:48:38.595530 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1746 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 23:48:38.611827 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 23:48:38.636275 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Jan 14 23:48:38.642637 update_engine[1963]: I20260114 23:48:38.640182 1963 main.cc:92] Flatcar Update Engine starting Jan 14 23:48:38.656264 extend-filesystems[2017]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 14 23:48:38.656264 extend-filesystems[2017]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 14 23:48:38.656264 extend-filesystems[2017]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Jan 14 23:48:38.695901 extend-filesystems[1950]: Resized filesystem in /dev/nvme0n1p9 Jan 14 23:48:38.700565 update_engine[1963]: I20260114 23:48:38.666954 1963 update_check_scheduler.cc:74] Next update check in 8m32s Jan 14 23:48:38.661949 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 14 23:48:38.673801 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 23:48:38.674370 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 23:48:38.679185 systemd[1]: Started update-engine.service - Update Engine. Jan 14 23:48:38.684780 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 14 23:48:38.692089 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 23:48:38.775314 bash[2041]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:48:38.779029 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 23:48:38.789678 systemd[1]: Starting sshkeys.service... Jan 14 23:48:38.800928 systemd-logind[1960]: Watching system buttons on /dev/input/event0 (Power Button) Jan 14 23:48:38.803354 systemd-logind[1960]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 14 23:48:38.807432 systemd-logind[1960]: New seat seat0. Jan 14 23:48:38.813341 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 23:48:38.924550 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 23:48:38.937961 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 14 23:48:39.077445 coreos-metadata[1946]: Jan 14 23:48:39.077 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 14 23:48:39.087754 coreos-metadata[1946]: Jan 14 23:48:39.087 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 14 23:48:39.091486 coreos-metadata[1946]: Jan 14 23:48:39.091 INFO Fetch successful Jan 14 23:48:39.091486 coreos-metadata[1946]: Jan 14 23:48:39.091 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 14 23:48:39.097203 coreos-metadata[1946]: Jan 14 23:48:39.095 INFO Fetch successful Jan 14 23:48:39.097203 coreos-metadata[1946]: Jan 14 23:48:39.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 14 23:48:39.100058 coreos-metadata[1946]: Jan 14 23:48:39.098 INFO Fetch successful Jan 14 23:48:39.100058 coreos-metadata[1946]: Jan 14 23:48:39.098 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 14 23:48:39.106614 coreos-metadata[1946]: Jan 14 23:48:39.106 INFO Fetch successful Jan 14 23:48:39.106614 coreos-metadata[1946]: Jan 14 23:48:39.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 14 23:48:39.107740 coreos-metadata[1946]: Jan 14 23:48:39.107 INFO Fetch failed with 404: resource not found Jan 14 23:48:39.107740 coreos-metadata[1946]: Jan 14 23:48:39.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 14 23:48:39.112275 coreos-metadata[1946]: Jan 14 23:48:39.108 INFO Fetch successful Jan 14 23:48:39.112275 coreos-metadata[1946]: Jan 14 23:48:39.108 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 14 23:48:39.117324 coreos-metadata[1946]: Jan 14 23:48:39.117 INFO Fetch successful Jan 14 23:48:39.117324 coreos-metadata[1946]: Jan 14 23:48:39.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 14 23:48:39.117494 coreos-metadata[1946]: Jan 14 23:48:39.117 INFO Fetch successful Jan 14 23:48:39.117494 coreos-metadata[1946]: Jan 14 23:48:39.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 14 23:48:39.120932 coreos-metadata[1946]: Jan 14 23:48:39.120 INFO Fetch successful Jan 14 23:48:39.120932 coreos-metadata[1946]: Jan 14 23:48:39.120 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 14 23:48:39.127449 coreos-metadata[1946]: Jan 14 23:48:39.127 INFO Fetch successful Jan 14 23:48:39.270822 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 14 23:48:39.285056 amazon-ssm-agent[2042]: Initializing new seelog logger Jan 14 23:48:39.287354 amazon-ssm-agent[2042]: New Seelog Logger Creation Complete Jan 14 23:48:39.289464 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.289464 amazon-ssm-agent[2042]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.299351 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 processing appconfig overrides Jan 14 23:48:39.305459 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.306092 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.3052 INFO Proxy environment variables: Jan 14 23:48:39.309095 amazon-ssm-agent[2042]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 14 23:48:39.309095 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 processing appconfig overrides Jan 14 23:48:39.309095 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.309095 amazon-ssm-agent[2042]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.309095 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 processing appconfig overrides Jan 14 23:48:39.314643 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.314643 amazon-ssm-agent[2042]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:39.314643 amazon-ssm-agent[2042]: 2026/01/14 23:48:39 processing appconfig overrides Jan 14 23:48:39.318006 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 23:48:39.337165 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 23:48:39.382985 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 23:48:39.405891 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.3053 INFO https_proxy: Jan 14 23:48:39.506310 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.3053 INFO http_proxy: Jan 14 23:48:39.536094 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 23:48:39.542348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 23:48:39.607776 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.3053 INFO no_proxy: Jan 14 23:48:39.609686 sshd_keygen[2004]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 23:48:39.614253 coreos-metadata[2069]: Jan 14 23:48:39.613 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 14 23:48:39.625653 coreos-metadata[2069]: Jan 14 23:48:39.621 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 14 23:48:39.626331 coreos-metadata[2069]: Jan 14 23:48:39.625 INFO Fetch successful Jan 14 23:48:39.626331 coreos-metadata[2069]: Jan 14 23:48:39.625 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 23:48:39.629592 coreos-metadata[2069]: Jan 14 23:48:39.629 INFO Fetch successful Jan 14 23:48:39.632653 unknown[2069]: wrote ssh authorized keys file for user: core Jan 14 23:48:39.685944 containerd[1996]: time="2026-01-14T23:48:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 23:48:39.703337 containerd[1996]: time="2026-01-14T23:48:39.701156578Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 23:48:39.716111 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.3071 INFO Checking if agent identity type OnPrem can be assumed Jan 14 23:48:39.738021 update-ssh-keys[2178]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:48:39.750360 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 23:48:39.761336 systemd[1]: Finished sshkeys.service. 
Jan 14 23:48:39.825426 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.3072 INFO Checking if agent identity type EC2 can be assumed Jan 14 23:48:39.851101 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856323106Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.528µs" Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856381474Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856451086Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856479298Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856765162Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856802662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856927258Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:48:39.860741 containerd[1996]: time="2026-01-14T23:48:39.856952590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.861971 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 14 23:48:39.875532 containerd[1996]: time="2026-01-14T23:48:39.875477831Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.882422 containerd[1996]: time="2026-01-14T23:48:39.881282507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:48:39.882422 containerd[1996]: time="2026-01-14T23:48:39.881364587Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:48:39.882422 containerd[1996]: time="2026-01-14T23:48:39.881389859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.882422 containerd[1996]: time="2026-01-14T23:48:39.881750651Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.882422 containerd[1996]: time="2026-01-14T23:48:39.881777495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 23:48:39.882422 containerd[1996]: time="2026-01-14T23:48:39.881937815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.887569 containerd[1996]: time="2026-01-14T23:48:39.886469531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.887569 containerd[1996]: time="2026-01-14T23:48:39.886569503Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:48:39.887569 containerd[1996]: time="2026-01-14T23:48:39.886596491Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 23:48:39.887569 containerd[1996]: time="2026-01-14T23:48:39.886703063Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 23:48:39.887569 containerd[1996]: time="2026-01-14T23:48:39.887131547Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 23:48:39.887569 containerd[1996]: time="2026-01-14T23:48:39.887326163Z" level=info msg="metadata content store policy set" policy=shared Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913379999Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913471931Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913617143Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913645823Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 23:48:39.913950 
containerd[1996]: time="2026-01-14T23:48:39.913677203Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913707095Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913734635Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913763195Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913792859Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913820963Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 23:48:39.913950 containerd[1996]: time="2026-01-14T23:48:39.913905239Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.913971587Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914001035Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914057843Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914414399Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914457539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914518055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914546603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 23:48:39.914631 containerd[1996]: time="2026-01-14T23:48:39.914600651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 23:48:39.915030 containerd[1996]: time="2026-01-14T23:48:39.914875163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 23:48:39.915030 containerd[1996]: time="2026-01-14T23:48:39.914912891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 23:48:39.915030 containerd[1996]: time="2026-01-14T23:48:39.914973539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 23:48:39.915155 containerd[1996]: time="2026-01-14T23:48:39.915002087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 23:48:39.915155 containerd[1996]: time="2026-01-14T23:48:39.915055715Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 
23:48:39.915155 containerd[1996]: time="2026-01-14T23:48:39.915083339Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 23:48:39.915297 containerd[1996]: time="2026-01-14T23:48:39.915179315Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 23:48:39.915363 containerd[1996]: time="2026-01-14T23:48:39.915306791Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 23:48:39.915410 containerd[1996]: time="2026-01-14T23:48:39.915366911Z" level=info msg="Start snapshots syncer" Jan 14 23:48:39.919580 containerd[1996]: time="2026-01-14T23:48:39.917316071Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 23:48:39.922852 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7136 INFO Agent will take identity from EC2 Jan 14 23:48:39.923119 containerd[1996]: time="2026-01-14T23:48:39.922987151Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 23:48:39.923594 containerd[1996]: time="2026-01-14T23:48:39.923156003Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 23:48:39.923594 containerd[1996]: time="2026-01-14T23:48:39.923345111Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 23:48:39.927392 containerd[1996]: time="2026-01-14T23:48:39.925600655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 23:48:39.933701 containerd[1996]: time="2026-01-14T23:48:39.933371447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes 
type=io.containerd.grpc.v1 Jan 14 23:48:39.933701 containerd[1996]: time="2026-01-14T23:48:39.933465107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 23:48:39.933701 containerd[1996]: time="2026-01-14T23:48:39.933527075Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 23:48:39.933701 containerd[1996]: time="2026-01-14T23:48:39.933560603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 23:48:39.933701 containerd[1996]: time="2026-01-14T23:48:39.933634271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 23:48:39.933701 containerd[1996]: time="2026-01-14T23:48:39.933689243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 23:48:39.935290 containerd[1996]: time="2026-01-14T23:48:39.933721331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 23:48:39.935290 containerd[1996]: time="2026-01-14T23:48:39.933774611Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 23:48:39.935290 containerd[1996]: time="2026-01-14T23:48:39.934334015Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:48:39.938507 containerd[1996]: time="2026-01-14T23:48:39.934383803Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:48:39.938507 containerd[1996]: time="2026-01-14T23:48:39.938404199Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:48:39.938507 containerd[1996]: time="2026-01-14T23:48:39.938480591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:48:39.941714 containerd[1996]: time="2026-01-14T23:48:39.938505419Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 23:48:39.941714 containerd[1996]: time="2026-01-14T23:48:39.941323595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 23:48:39.941714 containerd[1996]: time="2026-01-14T23:48:39.941396687Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 23:48:39.941714 containerd[1996]: time="2026-01-14T23:48:39.941450255Z" level=info msg="runtime interface created" Jan 14 23:48:39.941714 containerd[1996]: time="2026-01-14T23:48:39.941468375Z" level=info msg="created NRI interface" Jan 14 23:48:39.943701 containerd[1996]: time="2026-01-14T23:48:39.941491619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 23:48:39.943701 containerd[1996]: time="2026-01-14T23:48:39.943157651Z" level=info msg="Connect containerd service" Jan 14 23:48:39.945153 containerd[1996]: time="2026-01-14T23:48:39.945090023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 23:48:39.950689 containerd[1996]: time="2026-01-14T23:48:39.950623763Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config 
load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:48:39.973396 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 23:48:39.974621 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 23:48:39.982115 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 23:48:39.991678 locksmithd[2043]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 23:48:40.023328 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7334 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 14 23:48:40.069498 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 23:48:40.077040 polkitd[2156]: Started polkitd version 126 Jan 14 23:48:40.078117 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 23:48:40.083521 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 23:48:40.086428 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 23:48:40.118747 polkitd[2156]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 23:48:40.121549 polkitd[2156]: Loading rules from directory /run/polkit-1/rules.d Jan 14 23:48:40.121783 polkitd[2156]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 23:48:40.123636 polkitd[2156]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 23:48:40.123889 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7334 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 14 23:48:40.124049 polkitd[2156]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 23:48:40.125368 polkitd[2156]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 23:48:40.127482 polkitd[2156]: Finished loading, compiling and executing 2 rules Jan 14 23:48:40.129250 systemd[1]: Started polkit.service - Authorization Manager. Jan 14 23:48:40.133576 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 23:48:40.135893 polkitd[2156]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 23:48:40.178700 systemd-hostnamed[2027]: Hostname set to (transient) Jan 14 23:48:40.181420 systemd-resolved[1558]: System hostname changed to 'ip-172-31-18-197'. Jan 14 23:48:40.223129 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7334 INFO [amazon-ssm-agent] Starting Core Agent Jan 14 23:48:40.323133 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7334 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 14 23:48:40.339045 amazon-ssm-agent[2042]: 2026/01/14 23:48:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 14 23:48:40.339045 amazon-ssm-agent[2042]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
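The "failed to load cni during init" error a few entries above is the expected first-boot state: the CRI plugin looks for a network config under /etc/cni/net.d (the confDir in the config dump) and none has been installed yet. A minimal sketch of the kind of conflist that would satisfy it, assuming the stock bridge/portmap/host-local CNI binaries under /opt/cni/bin; the file name and subnet are illustrative, and on a kubeadm cluster the pod-network add-on normally installs its own config instead:

    # Sketch only: drop a minimal CNI conflist where the CRI plugin looks for one.
    # Needs root; assumes bridge/portmap/host-local plugins exist in /opt/cni/bin.
    import json, pathlib

    conflist = {
        "cniVersion": "1.0.0",
        "name": "containerd-net",                                # illustrative name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],    # illustrative subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist").write_text(
        json.dumps(conflist, indent=2))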
Jan 14 23:48:40.339251 amazon-ssm-agent[2042]: 2026/01/14 23:48:40 processing appconfig overrides Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7334 INFO [Registrar] Starting registrar module Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7480 INFO [EC2Identity] Checking disk for registration info Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7480 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:39.7480 INFO [EC2Identity] Generating registration keypair Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.2956 INFO [EC2Identity] Checking write access before registering Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.2964 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3387 INFO [EC2Identity] EC2 registration was successful. Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3387 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3389 INFO [CredentialRefresher] credentialRefresher has started Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3389 INFO [CredentialRefresher] Starting credentials refresher loop Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3880 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 14 23:48:40.388590 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3883 INFO [CredentialRefresher] Credentials ready Jan 14 23:48:40.422569 amazon-ssm-agent[2042]: 2026-01-14 23:48:40.3886 INFO [CredentialRefresher] Next credential rotation will be in 29.9999899173 minutes Jan 14 23:48:40.472044 containerd[1996]: time="2026-01-14T23:48:40.471954069Z" level=info msg="Start subscribing containerd event" Jan 14 23:48:40.472171 containerd[1996]: time="2026-01-14T23:48:40.472072701Z" level=info msg="Start recovering state" Jan 14 23:48:40.473674 containerd[1996]: time="2026-01-14T23:48:40.473616573Z" level=info msg="Start event monitor" Jan 14 23:48:40.474281 containerd[1996]: time="2026-01-14T23:48:40.473834613Z" level=info msg="Start cni network conf syncer for default" Jan 14 23:48:40.474281 containerd[1996]: time="2026-01-14T23:48:40.473862309Z" level=info msg="Start streaming server" Jan 14 23:48:40.474281 containerd[1996]: time="2026-01-14T23:48:40.473883549Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 23:48:40.474281 containerd[1996]: time="2026-01-14T23:48:40.473919993Z" level=info msg="runtime interface starting up..." Jan 14 23:48:40.474281 containerd[1996]: time="2026-01-14T23:48:40.473939421Z" level=info msg="starting plugins..." Jan 14 23:48:40.474281 containerd[1996]: time="2026-01-14T23:48:40.473974761Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 23:48:40.476278 containerd[1996]: time="2026-01-14T23:48:40.476184597Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 23:48:40.476582 containerd[1996]: time="2026-01-14T23:48:40.476550753Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 23:48:40.477051 containerd[1996]: time="2026-01-14T23:48:40.477015621Z" level=info msg="containerd successfully booted in 0.791812s" Jan 14 23:48:40.477374 systemd[1]: Started containerd.service - containerd container runtime. 
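The config="{...}" value on the "starting cri plugin" line above is a single JSON document. A short sketch of inspecting it, assuming the value has been copied (with the backslash escapes removed) into a local file named cri-config.json (the file name is illustrative):

    import json
    from pathlib import Path

    # cri-config.json is assumed to hold the unescaped JSON from the config=... field above.
    cfg = json.loads(Path("cri-config.json").read_text())
    print(cfg["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])  # True, per the dump
    print(cfg["cni"]["confDir"])                                              # /etc/cni/net.d
    print(cfg["cni"]["binDirs"])                                              # ['/opt/cni/bin']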
Jan 14 23:48:40.542485 tar[1970]: linux-arm64/README.md Jan 14 23:48:40.569368 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 23:48:41.416582 amazon-ssm-agent[2042]: 2026-01-14 23:48:41.4164 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 14 23:48:41.517164 amazon-ssm-agent[2042]: 2026-01-14 23:48:41.4229 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2230) started Jan 14 23:48:41.618020 amazon-ssm-agent[2042]: 2026-01-14 23:48:41.4230 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 14 23:48:43.510352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:48:43.518134 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 23:48:43.520898 systemd[1]: Startup finished in 4.319s (kernel) + 12.071s (initrd) + 15.418s (userspace) = 31.809s. Jan 14 23:48:43.529836 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:48:44.955044 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 23:48:44.958136 systemd[1]: Started sshd@0-172.31.18.197:22-20.161.92.111:53614.service - OpenSSH per-connection server daemon (20.161.92.111:53614). Jan 14 23:48:45.633476 systemd-resolved[1558]: Clock change detected. Flushing caches. Jan 14 23:48:45.771860 sshd[2256]: Accepted publickey for core from 20.161.92.111 port 53614 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:45.777151 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:45.802066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 23:48:45.803673 systemd-logind[1960]: New session 1 of user core. Jan 14 23:48:45.804116 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 23:48:45.845876 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 23:48:45.852467 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 23:48:45.872928 (systemd)[2261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 23:48:45.879049 systemd-logind[1960]: New session c1 of user core. Jan 14 23:48:45.981682 kubelet[2246]: E0114 23:48:45.981536 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:48:45.987407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:48:45.987915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:48:45.988796 systemd[1]: kubelet.service: Consumed 1.471s CPU time, 259.9M memory peak. Jan 14 23:48:46.176430 systemd[2261]: Queued start job for default target default.target. Jan 14 23:48:46.184928 systemd[2261]: Created slice app.slice - User Application Slice. Jan 14 23:48:46.185015 systemd[2261]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 23:48:46.185052 systemd[2261]: Reached target paths.target - Paths. 
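The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: that file is written by kubeadm during init/join, and systemd keeps restarting the unit until it appears (the "Scheduled restart job" entry later in the log). A minimal sketch of the shape of that file; the real one kubeadm generates carries many more fields:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd     # consistent with SystemdCgroup=true in the containerd CRI config above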
Jan 14 23:48:46.185155 systemd[2261]: Reached target timers.target - Timers. Jan 14 23:48:46.187705 systemd[2261]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 23:48:46.189855 systemd[2261]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 23:48:46.220139 systemd[2261]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 23:48:46.220698 systemd[2261]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 23:48:46.221398 systemd[2261]: Reached target sockets.target - Sockets. Jan 14 23:48:46.221629 systemd[2261]: Reached target basic.target - Basic System. Jan 14 23:48:46.221987 systemd[2261]: Reached target default.target - Main User Target. Jan 14 23:48:46.222270 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 23:48:46.223721 systemd[2261]: Startup finished in 330ms. Jan 14 23:48:46.231014 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 23:48:46.486375 systemd[1]: Started sshd@1-172.31.18.197:22-20.161.92.111:53624.service - OpenSSH per-connection server daemon (20.161.92.111:53624). Jan 14 23:48:46.958153 sshd[2276]: Accepted publickey for core from 20.161.92.111 port 53624 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:46.960346 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:46.969666 systemd-logind[1960]: New session 2 of user core. Jan 14 23:48:46.980856 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 23:48:47.195656 sshd[2279]: Connection closed by 20.161.92.111 port 53624 Jan 14 23:48:47.196444 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Jan 14 23:48:47.205200 systemd-logind[1960]: Session 2 logged out. Waiting for processes to exit. Jan 14 23:48:47.206435 systemd[1]: sshd@1-172.31.18.197:22-20.161.92.111:53624.service: Deactivated successfully. Jan 14 23:48:47.210343 systemd[1]: session-2.scope: Deactivated successfully. Jan 14 23:48:47.215863 systemd-logind[1960]: Removed session 2. Jan 14 23:48:47.290837 systemd[1]: Started sshd@2-172.31.18.197:22-20.161.92.111:53636.service - OpenSSH per-connection server daemon (20.161.92.111:53636). Jan 14 23:48:47.759466 sshd[2285]: Accepted publickey for core from 20.161.92.111 port 53636 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:47.761803 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:47.770136 systemd-logind[1960]: New session 3 of user core. Jan 14 23:48:47.777871 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 23:48:47.991706 sshd[2288]: Connection closed by 20.161.92.111 port 53636 Jan 14 23:48:47.992526 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Jan 14 23:48:48.000355 systemd-logind[1960]: Session 3 logged out. Waiting for processes to exit. Jan 14 23:48:48.002200 systemd[1]: sshd@2-172.31.18.197:22-20.161.92.111:53636.service: Deactivated successfully. Jan 14 23:48:48.007170 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 23:48:48.011916 systemd-logind[1960]: Removed session 3. Jan 14 23:48:48.104135 systemd[1]: Started sshd@3-172.31.18.197:22-20.161.92.111:53648.service - OpenSSH per-connection server daemon (20.161.92.111:53648). 
Jan 14 23:48:48.582104 sshd[2294]: Accepted publickey for core from 20.161.92.111 port 53648 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:48.584432 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:48.594683 systemd-logind[1960]: New session 4 of user core. Jan 14 23:48:48.601896 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 23:48:48.833363 sshd[2297]: Connection closed by 20.161.92.111 port 53648 Jan 14 23:48:48.834468 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jan 14 23:48:48.841173 systemd[1]: sshd@3-172.31.18.197:22-20.161.92.111:53648.service: Deactivated successfully. Jan 14 23:48:48.844391 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 23:48:48.846078 systemd-logind[1960]: Session 4 logged out. Waiting for processes to exit. Jan 14 23:48:48.848562 systemd-logind[1960]: Removed session 4. Jan 14 23:48:48.922928 systemd[1]: Started sshd@4-172.31.18.197:22-20.161.92.111:53660.service - OpenSSH per-connection server daemon (20.161.92.111:53660). Jan 14 23:48:49.372649 sshd[2303]: Accepted publickey for core from 20.161.92.111 port 53660 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:49.374710 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:49.383121 systemd-logind[1960]: New session 5 of user core. Jan 14 23:48:49.393892 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 23:48:49.587124 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 23:48:49.587773 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:48:49.607542 sudo[2307]: pam_unix(sudo:session): session closed for user root Jan 14 23:48:49.685409 sshd[2306]: Connection closed by 20.161.92.111 port 53660 Jan 14 23:48:49.686873 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Jan 14 23:48:49.695498 systemd[1]: sshd@4-172.31.18.197:22-20.161.92.111:53660.service: Deactivated successfully. Jan 14 23:48:49.699897 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 23:48:49.703701 systemd-logind[1960]: Session 5 logged out. Waiting for processes to exit. Jan 14 23:48:49.705860 systemd-logind[1960]: Removed session 5. Jan 14 23:48:49.778001 systemd[1]: Started sshd@5-172.31.18.197:22-20.161.92.111:53666.service - OpenSSH per-connection server daemon (20.161.92.111:53666). Jan 14 23:48:50.232081 sshd[2313]: Accepted publickey for core from 20.161.92.111 port 53666 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:50.234532 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:50.242781 systemd-logind[1960]: New session 6 of user core. Jan 14 23:48:50.253843 systemd[1]: Started session-6.scope - Session 6 of User core. 
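The sudo entry above runs /usr/sbin/setenforce 1, switching SELinux to enforcing for the rest of the install. A quick check of the resulting mode, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    # Prints "enforcing" if setenforce 1 took effect, "permissive" otherwise.
    mode = open("/sys/fs/selinux/enforce").read().strip()
    print("enforcing" if mode == "1" else "permissive")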
Jan 14 23:48:50.397495 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 23:48:50.398149 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:48:50.406845 sudo[2318]: pam_unix(sudo:session): session closed for user root Jan 14 23:48:50.418944 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 23:48:50.419544 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:48:50.436301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 23:48:50.498000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:48:50.500135 kernel: kauditd_printk_skb: 83 callbacks suppressed Jan 14 23:48:50.500236 kernel: audit: type=1305 audit(1768434530.498:237): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:48:50.498000 audit[2340]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff203d800 a2=420 a3=0 items=0 ppid=2321 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:50.503838 augenrules[2340]: No rules Jan 14 23:48:50.509603 kernel: audit: type=1300 audit(1768434530.498:237): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff203d800 a2=420 a3=0 items=0 ppid=2321 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:50.509697 kernel: audit: type=1327 audit(1768434530.498:237): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:48:50.498000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:48:50.514320 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:48:50.515192 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:48:50.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.518896 sudo[2317]: pam_unix(sudo:session): session closed for user root Jan 14 23:48:50.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.525501 kernel: audit: type=1130 audit(1768434530.515:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.525728 kernel: audit: type=1131 audit(1768434530.515:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:50.525791 kernel: audit: type=1106 audit(1768434530.515:240): pid=2317 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.515000 audit[2317]: USER_END pid=2317 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.515000 audit[2317]: CRED_DISP pid=2317 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.535275 kernel: audit: type=1104 audit(1768434530.515:241): pid=2317 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.601133 sshd[2316]: Connection closed by 20.161.92.111 port 53666 Jan 14 23:48:50.601992 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jan 14 23:48:50.603000 audit[2313]: USER_END pid=2313 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:50.608843 systemd-logind[1960]: Session 6 logged out. Waiting for processes to exit. Jan 14 23:48:50.603000 audit[2313]: CRED_DISP pid=2313 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:50.610448 systemd[1]: sshd@5-172.31.18.197:22-20.161.92.111:53666.service: Deactivated successfully. Jan 14 23:48:50.616068 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 23:48:50.617154 kernel: audit: type=1106 audit(1768434530.603:242): pid=2313 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:50.617273 kernel: audit: type=1104 audit(1768434530.603:243): pid=2313 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:50.617316 kernel: audit: type=1131 audit(1768434530.608:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.197:22-20.161.92.111:53666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.197:22-20.161.92.111:53666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:50.623531 systemd-logind[1960]: Removed session 6. 
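The audit records in this stretch encode the executed command line in hex: a PROCTITLE value is the process's argv with NUL separators. Decoding the value logged for the auditctl run above (the same decoding applies to the iptables/ip6tables PROCTITLE records further down, when dockerd creates its chains):

    # Audit PROCTITLE values are hex-encoded argv strings separated by NUL bytes.
    hexval = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(bytes.fromhex(hexval).decode().split("\x00"))
    # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']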
Jan 14 23:48:50.706171 systemd[1]: Started sshd@6-172.31.18.197:22-20.161.92.111:53680.service - OpenSSH per-connection server daemon (20.161.92.111:53680). Jan 14 23:48:50.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.197:22-20.161.92.111:53680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:51.166000 audit[2349]: USER_ACCT pid=2349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:51.167031 sshd[2349]: Accepted publickey for core from 20.161.92.111 port 53680 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:48:51.167000 audit[2349]: CRED_ACQ pid=2349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:51.167000 audit[2349]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff693f750 a2=3 a3=0 items=0 ppid=1 pid=2349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:51.167000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:48:51.169308 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:48:51.177689 systemd-logind[1960]: New session 7 of user core. Jan 14 23:48:51.184906 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 23:48:51.190000 audit[2349]: USER_START pid=2349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:51.193000 audit[2352]: CRED_ACQ pid=2352 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:48:51.333000 audit[2353]: USER_ACCT pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:48:51.334378 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 23:48:51.334000 audit[2353]: CRED_REFR pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:48:51.335041 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:48:51.338000 audit[2353]: USER_START pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:52.602730 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 23:48:52.633083 (dockerd)[2370]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 23:48:53.702099 dockerd[2370]: time="2026-01-14T23:48:53.702000556Z" level=info msg="Starting up" Jan 14 23:48:53.703464 dockerd[2370]: time="2026-01-14T23:48:53.703413376Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 23:48:53.723736 dockerd[2370]: time="2026-01-14T23:48:53.723665776Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 23:48:53.791761 dockerd[2370]: time="2026-01-14T23:48:53.791686612Z" level=info msg="Loading containers: start." Jan 14 23:48:53.808641 kernel: Initializing XFRM netlink socket Jan 14 23:48:53.977000 audit[2420]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:53.977000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff921e850 a2=0 a3=0 items=0 ppid=2370 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:53.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:48:53.981000 audit[2422]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:53.981000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe1d06c60 a2=0 a3=0 items=0 ppid=2370 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:53.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:48:53.985000 audit[2424]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:53.985000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe3f2270 a2=0 a3=0 items=0 ppid=2370 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:53.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:48:53.989000 audit[2426]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:53.989000 audit[2426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa3c0330 a2=0 a3=0 items=0 ppid=2370 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:53.989000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:48:53.993000 audit[2428]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:53.993000 audit[2428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd1afc1f0 a2=0 a3=0 items=0 ppid=2370 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:53.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:48:53.997000 audit[2430]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:53.997000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcb396590 a2=0 a3=0 items=0 ppid=2370 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:53.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:48:54.001000 audit[2432]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.001000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe107b360 a2=0 a3=0 items=0 ppid=2370 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.001000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:48:54.006000 audit[2434]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.006000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff283a860 a2=0 a3=0 items=0 ppid=2370 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:48:54.048000 audit[2437]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.048000 audit[2437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc1038250 a2=0 a3=0 items=0 ppid=2370 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.048000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 23:48:54.052000 audit[2439]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.052000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd2af4110 a2=0 a3=0 items=0 ppid=2370 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:48:54.056000 audit[2441]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.056000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff7d04d80 a2=0 a3=0 items=0 ppid=2370 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:48:54.061000 audit[2443]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.061000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd57987f0 a2=0 a3=0 items=0 ppid=2370 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:48:54.065000 audit[2445]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.065000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc2304aa0 a2=0 a3=0 items=0 ppid=2370 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:48:54.178000 audit[2475]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.178000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd4dba310 a2=0 a3=0 items=0 ppid=2370 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.178000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:48:54.183000 audit[2477]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.183000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc313d9a0 a2=0 a3=0 items=0 ppid=2370 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:48:54.187000 audit[2479]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.187000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf031830 a2=0 a3=0 items=0 ppid=2370 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:48:54.191000 audit[2481]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.191000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa396590 a2=0 a3=0 items=0 ppid=2370 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:48:54.195000 audit[2483]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.195000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff969eb70 a2=0 a3=0 items=0 ppid=2370 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.195000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:48:54.198000 audit[2485]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.198000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe0c25a20 a2=0 a3=0 items=0 ppid=2370 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:48:54.202000 audit[2487]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2487 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.202000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeb3b44e0 a2=0 a3=0 items=0 ppid=2370 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.202000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:48:54.207000 audit[2489]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.207000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffea51eab0 a2=0 a3=0 items=0 ppid=2370 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:48:54.212000 audit[2491]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.212000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffff8fd510 a2=0 a3=0 items=0 ppid=2370 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 23:48:54.218000 audit[2493]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.218000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc6a54990 a2=0 a3=0 items=0 ppid=2370 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:48:54.222000 audit[2495]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.222000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc11e6090 a2=0 a3=0 items=0 ppid=2370 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:48:54.226000 audit[2497]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2497 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.226000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc8a15e50 a2=0 a3=0 items=0 ppid=2370 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:48:54.230000 audit[2499]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.230000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc69a7000 a2=0 a3=0 items=0 ppid=2370 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:48:54.243000 audit[2504]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.243000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd773f4f0 a2=0 a3=0 items=0 ppid=2370 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:48:54.248000 audit[2506]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.248000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffffbd4350 a2=0 a3=0 items=0 ppid=2370 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:48:54.252000 audit[2508]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.252000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd198f630 a2=0 a3=0 items=0 ppid=2370 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:48:54.256000 audit[2510]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.256000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0aff3b0 a2=0 a3=0 items=0 ppid=2370 pid=2510 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:48:54.261000 audit[2512]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.261000 audit[2512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff45be310 a2=0 a3=0 items=0 ppid=2370 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:48:54.265000 audit[2514]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:48:54.265000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffee14ad20 a2=0 a3=0 items=0 ppid=2370 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:48:54.280804 (udev-worker)[2393]: Network interface NamePolicy= disabled on kernel command line. Jan 14 23:48:54.294000 audit[2519]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.294000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc30d02f0 a2=0 a3=0 items=0 ppid=2370 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 23:48:54.303000 audit[2521]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.303000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffac73020 a2=0 a3=0 items=0 ppid=2370 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 23:48:54.322000 audit[2529]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.322000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff9cc04f0 a2=0 a3=0 items=0 ppid=2370 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 23:48:54.341000 audit[2535]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.341000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffda5c8830 a2=0 a3=0 items=0 ppid=2370 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 23:48:54.346000 audit[2537]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.346000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffffde77460 a2=0 a3=0 items=0 ppid=2370 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 23:48:54.350000 audit[2539]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.350000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffcbfca00 a2=0 a3=0 items=0 ppid=2370 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.350000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 23:48:54.355000 audit[2541]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.355000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffeac6f0f0 a2=0 a3=0 items=0 ppid=2370 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:48:54.360000 audit[2543]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:48:54.360000 audit[2543]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=312 a0=3 a1=fffff87ca2e0 a2=0 a3=0 items=0 ppid=2370 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:54.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 23:48:54.361660 systemd-networkd[1746]: docker0: Link UP Jan 14 23:48:54.369286 dockerd[2370]: time="2026-01-14T23:48:54.369169239Z" level=info msg="Loading containers: done." Jan 14 23:48:54.397273 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck532073563-merged.mount: Deactivated successfully. Jan 14 23:48:54.413317 dockerd[2370]: time="2026-01-14T23:48:54.413017047Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 23:48:54.413317 dockerd[2370]: time="2026-01-14T23:48:54.413135379Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 23:48:54.413611 dockerd[2370]: time="2026-01-14T23:48:54.413429031Z" level=info msg="Initializing buildkit" Jan 14 23:48:54.455989 dockerd[2370]: time="2026-01-14T23:48:54.455889267Z" level=info msg="Completed buildkit initialization" Jan 14 23:48:54.470007 dockerd[2370]: time="2026-01-14T23:48:54.469913331Z" level=info msg="Daemon has completed initialization" Jan 14 23:48:54.470394 dockerd[2370]: time="2026-01-14T23:48:54.470202711Z" level=info msg="API listen on /run/docker.sock" Jan 14 23:48:54.470922 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 23:48:54.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:56.238353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 23:48:56.241047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:48:56.558346 containerd[1996]: time="2026-01-14T23:48:56.558208434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 23:48:56.687950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:48:56.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:48:56.689682 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 23:48:56.689778 kernel: audit: type=1130 audit(1768434536.687:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:48:56.703089 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:48:56.789519 kubelet[2590]: E0114 23:48:56.789449 2590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:48:56.798015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:48:56.798330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:48:56.799330 systemd[1]: kubelet.service: Consumed 322ms CPU time, 106.4M memory peak. Jan 14 23:48:56.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:48:56.805630 kernel: audit: type=1131 audit(1768434536.798:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:48:57.194165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883741153.mount: Deactivated successfully. Jan 14 23:48:58.533623 containerd[1996]: time="2026-01-14T23:48:58.533159804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:48:58.535710 containerd[1996]: time="2026-01-14T23:48:58.535634384Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 14 23:48:58.537571 containerd[1996]: time="2026-01-14T23:48:58.537488468Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:48:58.544610 containerd[1996]: time="2026-01-14T23:48:58.542699780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:48:58.544836 containerd[1996]: time="2026-01-14T23:48:58.544791236Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.985801098s" Jan 14 23:48:58.544952 containerd[1996]: time="2026-01-14T23:48:58.544926176Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 14 23:48:58.547825 containerd[1996]: time="2026-01-14T23:48:58.547761884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 23:49:00.255570 containerd[1996]: time="2026-01-14T23:49:00.255486872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:00.258113 containerd[1996]: 
time="2026-01-14T23:49:00.258037652Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 14 23:49:00.259776 containerd[1996]: time="2026-01-14T23:49:00.259702292Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:00.264309 containerd[1996]: time="2026-01-14T23:49:00.264260948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:00.266445 containerd[1996]: time="2026-01-14T23:49:00.266266580Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.71844646s" Jan 14 23:49:00.266445 containerd[1996]: time="2026-01-14T23:49:00.266318396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 14 23:49:00.267494 containerd[1996]: time="2026-01-14T23:49:00.267440060Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 23:49:01.781564 containerd[1996]: time="2026-01-14T23:49:01.781481472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:01.784375 containerd[1996]: time="2026-01-14T23:49:01.783967356Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Jan 14 23:49:01.785675 containerd[1996]: time="2026-01-14T23:49:01.785618700Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:01.790374 containerd[1996]: time="2026-01-14T23:49:01.790312416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:01.792481 containerd[1996]: time="2026-01-14T23:49:01.792433908Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.524934052s" Jan 14 23:49:01.792663 containerd[1996]: time="2026-01-14T23:49:01.792635820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 14 23:49:01.794064 containerd[1996]: time="2026-01-14T23:49:01.793928856Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 23:49:03.058270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250957138.mount: Deactivated successfully. 
Jan 14 23:49:03.627349 containerd[1996]: time="2026-01-14T23:49:03.627268777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:03.630147 containerd[1996]: time="2026-01-14T23:49:03.630059893Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Jan 14 23:49:03.631315 containerd[1996]: time="2026-01-14T23:49:03.631255945Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:03.635565 containerd[1996]: time="2026-01-14T23:49:03.635092825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:03.636304 containerd[1996]: time="2026-01-14T23:49:03.636246517Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.842259257s" Jan 14 23:49:03.636384 containerd[1996]: time="2026-01-14T23:49:03.636300877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 14 23:49:03.636940 containerd[1996]: time="2026-01-14T23:49:03.636885973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 23:49:04.185173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522110686.mount: Deactivated successfully. 
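The mount unit names systemd logs here (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount1522110686.mount, and the docker overlay2 one earlier) use systemd's unit-name escaping: path separators become "-" and a literal "-" becomes "\x2d". A small sketch that reverses the escaping to recover the underlying mount point:

```python
import re

def mount_unit_to_path(unit: str) -> str:
    """Reverse systemd mount-unit name escaping: '-' encodes '/', '\\xNN' encodes a literal byte."""
    body = unit.removesuffix(".mount")
    body = body.replace("-", "/")          # separators first: the '\xNN' escapes contain no '-'
    body = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

# Unit names taken from the journal lines above.
print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1522110686.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1522110686
print(mount_unit_to_path(r"var-lib-docker-overlay2-opaque\x2dbug\x2dcheck532073563-merged.mount"))
# -> /var/lib/docker/overlay2/opaque-bug-check532073563/merged
```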
Jan 14 23:49:05.484623 containerd[1996]: time="2026-01-14T23:49:05.484005278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:05.486832 containerd[1996]: time="2026-01-14T23:49:05.486769922Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338670" Jan 14 23:49:05.489440 containerd[1996]: time="2026-01-14T23:49:05.489364790Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:05.496121 containerd[1996]: time="2026-01-14T23:49:05.495212018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:05.497453 containerd[1996]: time="2026-01-14T23:49:05.497389934Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.860250905s" Jan 14 23:49:05.497780 containerd[1996]: time="2026-01-14T23:49:05.497449358Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 14 23:49:05.498152 containerd[1996]: time="2026-01-14T23:49:05.498069374Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 23:49:05.946616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099941344.mount: Deactivated successfully. 
Jan 14 23:49:05.956234 containerd[1996]: time="2026-01-14T23:49:05.955697236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:49:05.957170 containerd[1996]: time="2026-01-14T23:49:05.957012604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 23:49:05.958650 containerd[1996]: time="2026-01-14T23:49:05.958451212Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:49:05.964608 containerd[1996]: time="2026-01-14T23:49:05.963520204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:49:05.965090 containerd[1996]: time="2026-01-14T23:49:05.965049592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 466.909754ms" Jan 14 23:49:05.965228 containerd[1996]: time="2026-01-14T23:49:05.965185036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 14 23:49:05.966331 containerd[1996]: time="2026-01-14T23:49:05.966260404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 23:49:06.469001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185884179.mount: Deactivated successfully. Jan 14 23:49:07.049027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 23:49:07.053361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:49:08.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:08.621209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:49:08.629623 kernel: audit: type=1130 audit(1768434548.621:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:08.641922 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:49:08.783940 kubelet[2791]: E0114 23:49:08.783840 2791 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:49:08.791402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:49:08.791928 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
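Each kubelet start above exits with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet (it is normally written by kubeadm during init/join, after which the restart loop stops). A trivial, hypothetical check for following the loop on the node:

```python
#!/usr/bin/env python3
# Hypothetical helper: report whether the kubelet config the log complains about exists yet.
from pathlib import Path

config = Path("/var/lib/kubelet/config.yaml")
if config.is_file():
    print(f"{config} present ({config.stat().st_size} bytes); kubelet should stay up on the next restart")
else:
    print(f"{config} missing; kubelet.service will keep exiting with status 1 until it is written")
```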
Jan 14 23:49:08.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:49:08.793323 systemd[1]: kubelet.service: Consumed 338ms CPU time, 106.8M memory peak. Jan 14 23:49:08.799809 kernel: audit: type=1131 audit(1768434548.792:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:49:09.711631 containerd[1996]: time="2026-01-14T23:49:09.710821003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:09.714352 containerd[1996]: time="2026-01-14T23:49:09.714269251Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 14 23:49:09.715348 containerd[1996]: time="2026-01-14T23:49:09.715310551Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:09.721480 containerd[1996]: time="2026-01-14T23:49:09.721429135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:09.723718 containerd[1996]: time="2026-01-14T23:49:09.723512599Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.757190239s" Jan 14 23:49:09.723718 containerd[1996]: time="2026-01-14T23:49:09.723564715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 14 23:49:10.417371 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 23:49:10.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:10.429694 kernel: audit: type=1131 audit(1768434550.417:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:10.445000 audit: BPF prog-id=65 op=UNLOAD Jan 14 23:49:10.447617 kernel: audit: type=1334 audit(1768434550.445:300): prog-id=65 op=UNLOAD Jan 14 23:49:18.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:18.435979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:49:18.436538 systemd[1]: kubelet.service: Consumed 338ms CPU time, 106.8M memory peak. Jan 14 23:49:18.442306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
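The dockerd and containerd lines in this log are structured as space-separated key=value pairs with quoted, backslash-escaped values (time="…" level=info msg="…"). A rough sketch of pulling those fields apart, using an abbreviated msg from the etcd pull record above (the trailing "..." stands in for the rest of the message):

```python
import re

# Match key="quoted value with \" escapes" or key=bareword, as in the containerd/dockerd lines above.
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

line = r'time="2026-01-14T23:49:09.723512599Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" ..."'

fields = {}
for key, raw in FIELD.findall(line):
    if raw.startswith('"') and raw.endswith('"'):
        raw = raw[1:-1].replace(r'\"', '"')   # strip quotes, unescape embedded quotes
    fields[key] = raw

print(fields["level"], "-", fields["msg"])
# -> info - Pulled image "registry.k8s.io/etcd:3.5.21-0" ...
```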
Jan 14 23:49:18.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:18.449363 kernel: audit: type=1130 audit(1768434558.435:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:18.449482 kernel: audit: type=1131 audit(1768434558.435:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:18.496188 systemd[1]: Reload requested from client PID 2833 ('systemctl') (unit session-7.scope)... Jan 14 23:49:18.496395 systemd[1]: Reloading... Jan 14 23:49:18.754930 zram_generator::config[2881]: No configuration found. Jan 14 23:49:19.245898 systemd[1]: Reloading finished in 748 ms. Jan 14 23:49:19.302000 audit: BPF prog-id=69 op=LOAD Jan 14 23:49:19.302000 audit: BPF prog-id=47 op=UNLOAD Jan 14 23:49:19.309276 kernel: audit: type=1334 audit(1768434559.302:303): prog-id=69 op=LOAD Jan 14 23:49:19.309362 kernel: audit: type=1334 audit(1768434559.302:304): prog-id=47 op=UNLOAD Jan 14 23:49:19.313169 kernel: audit: type=1334 audit(1768434559.302:305): prog-id=70 op=LOAD Jan 14 23:49:19.302000 audit: BPF prog-id=70 op=LOAD Jan 14 23:49:19.302000 audit: BPF prog-id=71 op=LOAD Jan 14 23:49:19.314992 kernel: audit: type=1334 audit(1768434559.302:306): prog-id=71 op=LOAD Jan 14 23:49:19.302000 audit: BPF prog-id=48 op=UNLOAD Jan 14 23:49:19.316700 kernel: audit: type=1334 audit(1768434559.302:307): prog-id=48 op=UNLOAD Jan 14 23:49:19.302000 audit: BPF prog-id=49 op=UNLOAD Jan 14 23:49:19.318439 kernel: audit: type=1334 audit(1768434559.302:308): prog-id=49 op=UNLOAD Jan 14 23:49:19.314000 audit: BPF prog-id=72 op=LOAD Jan 14 23:49:19.314000 audit: BPF prog-id=62 op=UNLOAD Jan 14 23:49:19.321776 kernel: audit: type=1334 audit(1768434559.314:309): prog-id=72 op=LOAD Jan 14 23:49:19.321867 kernel: audit: type=1334 audit(1768434559.314:310): prog-id=62 op=UNLOAD Jan 14 23:49:19.316000 audit: BPF prog-id=73 op=LOAD Jan 14 23:49:19.316000 audit: BPF prog-id=74 op=LOAD Jan 14 23:49:19.316000 audit: BPF prog-id=63 op=UNLOAD Jan 14 23:49:19.316000 audit: BPF prog-id=64 op=UNLOAD Jan 14 23:49:19.319000 audit: BPF prog-id=75 op=LOAD Jan 14 23:49:19.323000 audit: BPF prog-id=46 op=UNLOAD Jan 14 23:49:19.325000 audit: BPF prog-id=76 op=LOAD Jan 14 23:49:19.325000 audit: BPF prog-id=56 op=UNLOAD Jan 14 23:49:19.326000 audit: BPF prog-id=77 op=LOAD Jan 14 23:49:19.326000 audit: BPF prog-id=53 op=UNLOAD Jan 14 23:49:19.326000 audit: BPF prog-id=78 op=LOAD Jan 14 23:49:19.327000 audit: BPF prog-id=79 op=LOAD Jan 14 23:49:19.327000 audit: BPF prog-id=54 op=UNLOAD Jan 14 23:49:19.327000 audit: BPF prog-id=55 op=UNLOAD Jan 14 23:49:19.329000 audit: BPF prog-id=80 op=LOAD Jan 14 23:49:19.329000 audit: BPF prog-id=68 op=UNLOAD Jan 14 23:49:19.330000 audit: BPF prog-id=81 op=LOAD Jan 14 23:49:19.330000 audit: BPF prog-id=57 op=UNLOAD Jan 14 23:49:19.331000 audit: BPF prog-id=82 op=LOAD Jan 14 23:49:19.331000 audit: BPF prog-id=83 op=LOAD Jan 14 23:49:19.331000 audit: BPF prog-id=58 op=UNLOAD Jan 14 23:49:19.331000 audit: BPF prog-id=59 op=UNLOAD Jan 14 23:49:19.332000 audit: BPF prog-id=84 op=LOAD Jan 14 23:49:19.332000 audit: BPF prog-id=85 op=LOAD Jan 14 
23:49:19.332000 audit: BPF prog-id=60 op=UNLOAD Jan 14 23:49:19.332000 audit: BPF prog-id=61 op=UNLOAD Jan 14 23:49:19.333000 audit: BPF prog-id=86 op=LOAD Jan 14 23:49:19.333000 audit: BPF prog-id=50 op=UNLOAD Jan 14 23:49:19.334000 audit: BPF prog-id=87 op=LOAD Jan 14 23:49:19.334000 audit: BPF prog-id=88 op=LOAD Jan 14 23:49:19.334000 audit: BPF prog-id=51 op=UNLOAD Jan 14 23:49:19.334000 audit: BPF prog-id=52 op=UNLOAD Jan 14 23:49:19.361299 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 23:49:19.361508 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 23:49:19.362184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:49:19.362307 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95.3M memory peak. Jan 14 23:49:19.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:49:19.369179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:49:20.383674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:49:20.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:20.398088 (kubelet)[2944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:49:20.470527 kubelet[2944]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:49:20.471055 kubelet[2944]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:49:20.471145 kubelet[2944]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 23:49:20.471373 kubelet[2944]: I0114 23:49:20.471327 2944 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:49:22.428656 kubelet[2944]: I0114 23:49:22.427714 2944 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 23:49:22.428656 kubelet[2944]: I0114 23:49:22.427760 2944 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:49:22.428656 kubelet[2944]: I0114 23:49:22.428128 2944 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 23:49:22.478663 kubelet[2944]: E0114 23:49:22.478558 2944 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.197:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 23:49:22.479898 kubelet[2944]: I0114 23:49:22.479626 2944 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:49:22.494382 kubelet[2944]: I0114 23:49:22.494334 2944 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:49:22.500628 kubelet[2944]: I0114 23:49:22.500234 2944 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 23:49:22.500905 kubelet[2944]: I0114 23:49:22.500849 2944 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:49:22.501209 kubelet[2944]: I0114 23:49:22.500906 2944 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:49:22.501388 kubelet[2944]: I0114 23:49:22.501343 2944 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 23:49:22.501388 kubelet[2944]: I0114 
23:49:22.501365 2944 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 23:49:22.501752 kubelet[2944]: I0114 23:49:22.501724 2944 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:49:22.509009 kubelet[2944]: I0114 23:49:22.508810 2944 kubelet.go:480] "Attempting to sync node with API server" Jan 14 23:49:22.509009 kubelet[2944]: I0114 23:49:22.508860 2944 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:49:22.509009 kubelet[2944]: I0114 23:49:22.508907 2944 kubelet.go:386] "Adding apiserver pod source" Jan 14 23:49:22.509009 kubelet[2944]: I0114 23:49:22.508952 2944 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:49:22.518505 kubelet[2944]: E0114 23:49:22.518450 2944 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.197:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-197&limit=500&resourceVersion=0\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 23:49:22.519725 kubelet[2944]: I0114 23:49:22.518751 2944 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:49:22.520739 kubelet[2944]: I0114 23:49:22.520705 2944 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 23:49:22.521113 kubelet[2944]: W0114 23:49:22.521091 2944 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 23:49:22.529690 kubelet[2944]: I0114 23:49:22.529645 2944 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 23:49:22.529830 kubelet[2944]: I0114 23:49:22.529724 2944 server.go:1289] "Started kubelet" Jan 14 23:49:22.533679 kubelet[2944]: E0114 23:49:22.532674 2944 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.197:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 23:49:22.533679 kubelet[2944]: I0114 23:49:22.533333 2944 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:49:22.537271 kubelet[2944]: I0114 23:49:22.537188 2944 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:49:22.537903 kubelet[2944]: I0114 23:49:22.537873 2944 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:49:22.538477 kubelet[2944]: I0114 23:49:22.538420 2944 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:49:22.546620 kubelet[2944]: E0114 23:49:22.542968 2944 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.197:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.197:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-197.188abddd741adbfb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-197,UID:ip-172-31-18-197,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-197,},FirstTimestamp:2026-01-14 23:49:22.529680379 +0000 UTC m=+2.123550624,LastTimestamp:2026-01-14 23:49:22.529680379 +0000 UTC m=+2.123550624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-197,}" Jan 14 23:49:22.553743 kubelet[2944]: I0114 23:49:22.553693 2944 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 23:49:22.554137 kubelet[2944]: E0114 23:49:22.554085 2944 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-197\" not found" Jan 14 23:49:22.554411 kubelet[2944]: I0114 23:49:22.554385 2944 server.go:317] "Adding debug handlers to kubelet server" Jan 14 23:49:22.554605 kubelet[2944]: I0114 23:49:22.554552 2944 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 23:49:22.554708 kubelet[2944]: I0114 23:49:22.554677 2944 reconciler.go:26] "Reconciler: start to sync state" Jan 14 23:49:22.559399 kubelet[2944]: I0114 23:49:22.557759 2944 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:49:22.557000 audit[2959]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.557000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffde9a76e0 a2=0 a3=0 items=0 ppid=2944 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 23:49:22.560000 audit[2960]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.560000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffce7ac0 a2=0 a3=0 items=0 ppid=2944 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:49:22.566145 kubelet[2944]: E0114 23:49:22.565407 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-197?timeout=10s\": dial tcp 172.31.18.197:6443: connect: connection refused" interval="200ms" Jan 14 23:49:22.567757 kubelet[2944]: E0114 23:49:22.567198 2944 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.197:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 23:49:22.568000 audit[2962]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.568000 audit[2962]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=340 a0=3 a1=ffffd781f0e0 a2=0 a3=0 items=0 ppid=2944 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:49:22.569838 kubelet[2944]: E0114 23:49:22.569783 2944 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 23:49:22.570044 kubelet[2944]: I0114 23:49:22.570009 2944 factory.go:223] Registration of the containerd container factory successfully Jan 14 23:49:22.570044 kubelet[2944]: I0114 23:49:22.570039 2944 factory.go:223] Registration of the systemd container factory successfully Jan 14 23:49:22.570239 kubelet[2944]: I0114 23:49:22.570199 2944 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:49:22.575000 audit[2964]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.575000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc0c820e0 a2=0 a3=0 items=0 ppid=2944 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:49:22.596000 audit[2971]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.596000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc39ad180 a2=0 a3=0 items=0 ppid=2944 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 23:49:22.599090 kubelet[2944]: I0114 23:49:22.599036 2944 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 14 23:49:22.600000 audit[2974]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:22.600000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc983e350 a2=0 a3=0 items=0 ppid=2944 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 23:49:22.603444 kubelet[2944]: I0114 23:49:22.602949 2944 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 23:49:22.603444 kubelet[2944]: I0114 23:49:22.602986 2944 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 23:49:22.603444 kubelet[2944]: I0114 23:49:22.603019 2944 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 23:49:22.603444 kubelet[2944]: I0114 23:49:22.603034 2944 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 23:49:22.603444 kubelet[2944]: E0114 23:49:22.603104 2944 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:49:22.603000 audit[2973]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.603000 audit[2973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa9272b0 a2=0 a3=0 items=0 ppid=2944 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 23:49:22.605393 kubelet[2944]: I0114 23:49:22.604621 2944 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:49:22.605393 kubelet[2944]: I0114 23:49:22.604651 2944 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:49:22.605393 kubelet[2944]: I0114 23:49:22.604683 2944 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:49:22.606574 kubelet[2944]: E0114 23:49:22.606524 2944 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.197:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 23:49:22.607000 audit[2975]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:22.607000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6217c40 a2=0 a3=0 items=0 ppid=2944 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.607000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 23:49:22.609220 kubelet[2944]: I0114 23:49:22.609191 2944 policy_none.go:49] "None policy: Start" Jan 14 23:49:22.610061 kubelet[2944]: I0114 23:49:22.609659 2944 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 23:49:22.610061 kubelet[2944]: I0114 23:49:22.609701 2944 state_mem.go:35] "Initializing new in-memory state store" Jan 14 23:49:22.610000 audit[2976]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.610000 audit[2976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde33ab60 a2=0 a3=0 items=0 ppid=2944 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 23:49:22.612000 audit[2977]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:22.612000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9cbc4b0 a2=0 a3=0 items=0 ppid=2944 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 23:49:22.615000 audit[2979]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:22.615000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd65be640 a2=0 a3=0 items=0 ppid=2944 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 23:49:22.617000 audit[2980]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:22.617000 audit[2980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca756780 a2=0 a3=0 items=0 ppid=2944 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:22.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 23:49:22.624214 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 23:49:22.643505 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 23:49:22.652229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
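In the NETFILTER_CFG/SYSCALL/PROCTITLE triplets above (and in the earlier Docker ones), arch=c00000b7 is AUDIT_ARCH_AARCH64, syscall 211 on arm64 is sendmsg (the nft ruleset is pushed over netlink), and PROCTITLE carries the invoking command line hex-encoded with NUL separators. A short decoder for a proctitle value taken from the KUBE-FIREWALL record above:

```python
# Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes.
def decode_proctitle(hexstr: str) -> list[str]:
    return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

# Value copied from the NETFILTER_CFG record that creates the KUBE-FIREWALL chain.
proctitle = ("69707461626C6573002D770035002D5700313030303030"
             "002D4E004B5542452D4649524557414C4C002D740066696C746572")
print(decode_proctitle(proctitle))
# -> ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-FIREWALL', '-t', 'filter']
```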
Jan 14 23:49:22.654355 kubelet[2944]: E0114 23:49:22.654302 2944 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-197\" not found" Jan 14 23:49:22.664693 kubelet[2944]: E0114 23:49:22.664527 2944 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 23:49:22.665658 kubelet[2944]: I0114 23:49:22.665264 2944 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:49:22.665658 kubelet[2944]: I0114 23:49:22.665288 2944 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:49:22.666651 kubelet[2944]: I0114 23:49:22.666621 2944 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:49:22.671339 kubelet[2944]: E0114 23:49:22.670768 2944 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 23:49:22.671675 kubelet[2944]: E0114 23:49:22.671648 2944 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-197\" not found" Jan 14 23:49:22.727479 systemd[1]: Created slice kubepods-burstable-pod9bc83122488d7c43421b8e3e7827bdd2.slice - libcontainer container kubepods-burstable-pod9bc83122488d7c43421b8e3e7827bdd2.slice. Jan 14 23:49:22.753909 kubelet[2944]: E0114 23:49:22.753856 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:22.762217 systemd[1]: Created slice kubepods-burstable-podb48e8f30b30d49e16633ce6306866fbb.slice - libcontainer container kubepods-burstable-podb48e8f30b30d49e16633ce6306866fbb.slice. Jan 14 23:49:22.769280 kubelet[2944]: E0114 23:49:22.768563 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-197?timeout=10s\": dial tcp 172.31.18.197:6443: connect: connection refused" interval="400ms" Jan 14 23:49:22.770112 kubelet[2944]: I0114 23:49:22.770051 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-197" Jan 14 23:49:22.770984 kubelet[2944]: E0114 23:49:22.770767 2944 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.197:6443/api/v1/nodes\": dial tcp 172.31.18.197:6443: connect: connection refused" node="ip-172-31-18-197" Jan 14 23:49:22.773558 kubelet[2944]: E0114 23:49:22.772864 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:22.779187 systemd[1]: Created slice kubepods-burstable-pod6a1e434e62c735300e7198691e88e3d7.slice - libcontainer container kubepods-burstable-pod6a1e434e62c735300e7198691e88e3d7.slice. 
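The kubepods-burstable-pod<uid>.slice units created here follow the systemd cgroup driver's kubepods hierarchy (the container manager config earlier shows CgroupDriver systemd, CgroupRoot "/", CgroupVersion 2). A sketch of the resulting cgroup path for the kube-apiserver static pod, assuming the usual unified cgroup mount at /sys/fs/cgroup (verify the layout on your own node):

```python
# Where the systemd cgroup driver conventionally places the static-pod slice created above.
pod_uid = "9bc83122488d7c43421b8e3e7827bdd2"   # UID of the kube-apiserver static pod in this log
qos = "burstable"

slice_name = f"kubepods-{qos}-pod{pod_uid}.slice"
cgroup_path = f"/sys/fs/cgroup/kubepods.slice/kubepods-{qos}.slice/{slice_name}"
print(cgroup_path)
```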
Jan 14 23:49:22.783499 kubelet[2944]: E0114 23:49:22.783462 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:22.856621 kubelet[2944]: I0114 23:49:22.856071 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bc83122488d7c43421b8e3e7827bdd2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-197\" (UID: \"9bc83122488d7c43421b8e3e7827bdd2\") " pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:22.856621 kubelet[2944]: I0114 23:49:22.856157 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:22.856621 kubelet[2944]: I0114 23:49:22.856213 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:22.856621 kubelet[2944]: I0114 23:49:22.856260 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:22.856621 kubelet[2944]: I0114 23:49:22.856301 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bc83122488d7c43421b8e3e7827bdd2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-197\" (UID: \"9bc83122488d7c43421b8e3e7827bdd2\") " pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:22.857001 kubelet[2944]: I0114 23:49:22.856349 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bc83122488d7c43421b8e3e7827bdd2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-197\" (UID: \"9bc83122488d7c43421b8e3e7827bdd2\") " pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:22.857001 kubelet[2944]: I0114 23:49:22.856415 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:22.857001 kubelet[2944]: I0114 23:49:22.856471 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 
14 23:49:22.857001 kubelet[2944]: I0114 23:49:22.856525 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a1e434e62c735300e7198691e88e3d7-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-197\" (UID: \"6a1e434e62c735300e7198691e88e3d7\") " pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:22.974369 kubelet[2944]: I0114 23:49:22.974176 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-197" Jan 14 23:49:22.974937 kubelet[2944]: E0114 23:49:22.974898 2944 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.197:6443/api/v1/nodes\": dial tcp 172.31.18.197:6443: connect: connection refused" node="ip-172-31-18-197" Jan 14 23:49:23.056041 containerd[1996]: time="2026-01-14T23:49:23.055838633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-197,Uid:9bc83122488d7c43421b8e3e7827bdd2,Namespace:kube-system,Attempt:0,}" Jan 14 23:49:23.076981 containerd[1996]: time="2026-01-14T23:49:23.076891733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-197,Uid:b48e8f30b30d49e16633ce6306866fbb,Namespace:kube-system,Attempt:0,}" Jan 14 23:49:23.086981 containerd[1996]: time="2026-01-14T23:49:23.086935050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-197,Uid:6a1e434e62c735300e7198691e88e3d7,Namespace:kube-system,Attempt:0,}" Jan 14 23:49:23.101616 containerd[1996]: time="2026-01-14T23:49:23.101211462Z" level=info msg="connecting to shim c60731a5637e60d73dd3db65c6b2c95797f7f6d2dd05cece0984664d0ec9ae43" address="unix:///run/containerd/s/be53b1e73177744f93faca9d1919e646c08dacf22a0645a4e70febb3a8760138" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:49:23.149612 containerd[1996]: time="2026-01-14T23:49:23.149489790Z" level=info msg="connecting to shim 7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6" address="unix:///run/containerd/s/14087e40d13dc3990ad0ca37e0f2ba2d9908d8d17b7305f0f3b7461981f6193a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:49:23.172248 kubelet[2944]: E0114 23:49:23.172161 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-197?timeout=10s\": dial tcp 172.31.18.197:6443: connect: connection refused" interval="800ms" Jan 14 23:49:23.180885 containerd[1996]: time="2026-01-14T23:49:23.180830298Z" level=info msg="connecting to shim 45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72" address="unix:///run/containerd/s/09de55abdc151c170cb6ec4a7c570c28e4b13c12fc6e2cb0fbb99106eed57792" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:49:23.207470 systemd[1]: Started cri-containerd-c60731a5637e60d73dd3db65c6b2c95797f7f6d2dd05cece0984664d0ec9ae43.scope - libcontainer container c60731a5637e60d73dd3db65c6b2c95797f7f6d2dd05cece0984664d0ec9ae43. Jan 14 23:49:23.225225 systemd[1]: Started cri-containerd-7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6.scope - libcontainer container 7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6. Jan 14 23:49:23.269913 systemd[1]: Started cri-containerd-45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72.scope - libcontainer container 45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72. 
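The "Failed to ensure lease exists, will retry" records double their interval on each failed attempt (200ms, then 400ms, then 800ms) while the API server at 172.31.18.197:6443 refuses connections. A minimal sketch of that doubling-with-cap backoff pattern; the 7s cap is an assumption for illustration, since the kubelet's actual ceiling is not visible in this log:

```python
import itertools

def backoff_intervals(initial: float = 0.2, factor: float = 2.0, cap: float = 7.0):
    """Yield retry delays that double each attempt up to a cap (cap chosen for illustration)."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

# The first few delays match the intervals logged above: 0.2s, 0.4s, 0.8s, ...
print([round(d, 1) for d in itertools.islice(backoff_intervals(), 6)])
# -> [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
```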
Jan 14 23:49:23.276000 audit: BPF prog-id=89 op=LOAD Jan 14 23:49:23.279000 audit: BPF prog-id=90 op=LOAD Jan 14 23:49:23.279000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.279000 audit: BPF prog-id=90 op=UNLOAD Jan 14 23:49:23.279000 audit[3037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.280000 audit: BPF prog-id=91 op=LOAD Jan 14 23:49:23.280000 audit: BPF prog-id=92 op=LOAD Jan 14 23:49:23.280000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.281000 audit: BPF prog-id=93 op=LOAD Jan 14 23:49:23.281000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.281000 audit: BPF prog-id=93 op=UNLOAD Jan 14 23:49:23.281000 audit[3037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.282000 audit: BPF prog-id=92 
op=UNLOAD Jan 14 23:49:23.282000 audit[3037]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.282000 audit: BPF prog-id=94 op=LOAD Jan 14 23:49:23.282000 audit[3037]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=3010 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761646361393436636265653633613835663563343233366461616466 Jan 14 23:49:23.283000 audit: BPF prog-id=95 op=LOAD Jan 14 23:49:23.283000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2990 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.283000 audit: BPF prog-id=95 op=UNLOAD Jan 14 23:49:23.283000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.284000 audit: BPF prog-id=96 op=LOAD Jan 14 23:49:23.284000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2990 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.284000 audit: BPF prog-id=97 op=LOAD Jan 14 23:49:23.284000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2990 pid=3008 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.284000 audit: BPF prog-id=97 op=UNLOAD Jan 14 23:49:23.284000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.284000 audit: BPF prog-id=96 op=UNLOAD Jan 14 23:49:23.284000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.284000 audit: BPF prog-id=98 op=LOAD Jan 14 23:49:23.284000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2990 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303733316135363337653630643733646433646236356336623263 Jan 14 23:49:23.301000 audit: BPF prog-id=99 op=LOAD Jan 14 23:49:23.304000 audit: BPF prog-id=100 op=LOAD Jan 14 23:49:23.304000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.304000 audit: BPF prog-id=100 op=UNLOAD Jan 14 23:49:23.304000 audit[3063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.304000 audit: BPF prog-id=101 op=LOAD Jan 14 23:49:23.304000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.304000 audit: BPF prog-id=102 op=LOAD Jan 14 23:49:23.304000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.305000 audit: BPF prog-id=102 op=UNLOAD Jan 14 23:49:23.305000 audit[3063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.305000 audit: BPF prog-id=101 op=UNLOAD Jan 14 23:49:23.305000 audit[3063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.305000 audit: BPF prog-id=103 op=LOAD Jan 14 23:49:23.305000 audit[3063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3033 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.305000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435623361623138393035363436613832653464313866343437376133 Jan 14 23:49:23.381607 kubelet[2944]: I0114 23:49:23.381514 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-197" Jan 14 23:49:23.383136 kubelet[2944]: E0114 23:49:23.383086 2944 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.197:6443/api/v1/nodes\": dial tcp 172.31.18.197:6443: connect: connection refused" node="ip-172-31-18-197" Jan 14 23:49:23.385199 containerd[1996]: time="2026-01-14T23:49:23.384403627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-197,Uid:b48e8f30b30d49e16633ce6306866fbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6\"" Jan 14 23:49:23.395847 containerd[1996]: time="2026-01-14T23:49:23.395790955Z" level=info msg="CreateContainer within sandbox \"7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 23:49:23.405744 containerd[1996]: time="2026-01-14T23:49:23.405692299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-197,Uid:9bc83122488d7c43421b8e3e7827bdd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c60731a5637e60d73dd3db65c6b2c95797f7f6d2dd05cece0984664d0ec9ae43\"" Jan 14 23:49:23.412169 containerd[1996]: time="2026-01-14T23:49:23.412084279Z" level=info msg="CreateContainer within sandbox \"c60731a5637e60d73dd3db65c6b2c95797f7f6d2dd05cece0984664d0ec9ae43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 23:49:23.425138 containerd[1996]: time="2026-01-14T23:49:23.425075839Z" level=info msg="Container f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:49:23.427505 containerd[1996]: time="2026-01-14T23:49:23.427441915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-197,Uid:6a1e434e62c735300e7198691e88e3d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72\"" Jan 14 23:49:23.435826 containerd[1996]: time="2026-01-14T23:49:23.435777607Z" level=info msg="Container 03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:49:23.436228 containerd[1996]: time="2026-01-14T23:49:23.436150423Z" level=info msg="CreateContainer within sandbox \"45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 23:49:23.441245 containerd[1996]: time="2026-01-14T23:49:23.441177151Z" level=info msg="CreateContainer within sandbox \"7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56\"" Jan 14 23:49:23.447520 containerd[1996]: time="2026-01-14T23:49:23.447443359Z" level=info msg="StartContainer for \"f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56\"" Jan 14 23:49:23.453853 containerd[1996]: time="2026-01-14T23:49:23.453783799Z" level=info msg="connecting to shim 
f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56" address="unix:///run/containerd/s/14087e40d13dc3990ad0ca37e0f2ba2d9908d8d17b7305f0f3b7461981f6193a" protocol=ttrpc version=3 Jan 14 23:49:23.465133 containerd[1996]: time="2026-01-14T23:49:23.464535319Z" level=info msg="CreateContainer within sandbox \"c60731a5637e60d73dd3db65c6b2c95797f7f6d2dd05cece0984664d0ec9ae43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c\"" Jan 14 23:49:23.465397 containerd[1996]: time="2026-01-14T23:49:23.465328099Z" level=info msg="StartContainer for \"03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c\"" Jan 14 23:49:23.470531 containerd[1996]: time="2026-01-14T23:49:23.470453107Z" level=info msg="connecting to shim 03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c" address="unix:///run/containerd/s/be53b1e73177744f93faca9d1919e646c08dacf22a0645a4e70febb3a8760138" protocol=ttrpc version=3 Jan 14 23:49:23.475807 containerd[1996]: time="2026-01-14T23:49:23.475542823Z" level=info msg="Container c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:49:23.494217 containerd[1996]: time="2026-01-14T23:49:23.494134736Z" level=info msg="CreateContainer within sandbox \"45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8\"" Jan 14 23:49:23.495343 containerd[1996]: time="2026-01-14T23:49:23.495033512Z" level=info msg="StartContainer for \"c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8\"" Jan 14 23:49:23.498979 containerd[1996]: time="2026-01-14T23:49:23.498871484Z" level=info msg="connecting to shim c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8" address="unix:///run/containerd/s/09de55abdc151c170cb6ec4a7c570c28e4b13c12fc6e2cb0fbb99106eed57792" protocol=ttrpc version=3 Jan 14 23:49:23.502150 systemd[1]: Started cri-containerd-f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56.scope - libcontainer container f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56. Jan 14 23:49:23.545959 systemd[1]: Started cri-containerd-03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c.scope - libcontainer container 03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c. 
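The audit records interleaved above carry the runc command line as a hex-encoded, NUL-separated PROCTITLE field. A small Go sketch (not part of the log) that decodes such a value back into readable argv; the sample below is a truncated prefix of one proctitle string from these records:

```go
// Minimal sketch (not from the log): audit PROCTITLE values are hex-encoded argv
// with NUL separators; decoding one recovers the runc invocation for a shim task.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the audit record.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Truncated prefix of a proctitle value from the log; real records are longer.
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	argv, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(argv) // runc --root /run/containerd/runc/k8s.io
}
```

Decoding the full values yields runc invocations of the form `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…`, matching the shim tasks started in the entries above.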
Jan 14 23:49:23.560696 kernel: kauditd_printk_skb: 136 callbacks suppressed Jan 14 23:49:23.560841 kernel: audit: type=1334 audit(1768434563.556:381): prog-id=104 op=LOAD Jan 14 23:49:23.556000 audit: BPF prog-id=104 op=LOAD Jan 14 23:49:23.561178 kubelet[2944]: E0114 23:49:23.561135 2944 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.197:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 23:49:23.566080 kernel: audit: type=1334 audit(1768434563.561:382): prog-id=105 op=LOAD Jan 14 23:49:23.561000 audit: BPF prog-id=105 op=LOAD Jan 14 23:49:23.575383 kernel: audit: type=1300 audit(1768434563.561:382): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.561000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.583246 kernel: audit: type=1327 audit(1768434563.561:382): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.561000 audit: BPF prog-id=105 op=UNLOAD Jan 14 23:49:23.561000 audit[3118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.591468 kernel: audit: type=1334 audit(1768434563.561:383): prog-id=105 op=UNLOAD Jan 14 23:49:23.591569 kernel: audit: type=1300 audit(1768434563.561:383): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.591930 systemd[1]: Started cri-containerd-c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8.scope - libcontainer container c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8. 
Jan 14 23:49:23.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.599105 kernel: audit: type=1327 audit(1768434563.561:383): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.601039 kernel: audit: type=1334 audit(1768434563.562:384): prog-id=106 op=LOAD Jan 14 23:49:23.562000 audit: BPF prog-id=106 op=LOAD Jan 14 23:49:23.562000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.609051 kernel: audit: type=1300 audit(1768434563.562:384): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.618106 kernel: audit: type=1327 audit(1768434563.562:384): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.563000 audit: BPF prog-id=107 op=LOAD Jan 14 23:49:23.563000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.563000 audit: BPF prog-id=107 op=UNLOAD Jan 14 23:49:23.563000 audit[3118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.563000 audit: BPF prog-id=106 op=UNLOAD Jan 14 
23:49:23.563000 audit[3118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.563000 audit: BPF prog-id=108 op=LOAD Jan 14 23:49:23.563000 audit[3118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3010 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637613336323732653266323638363034336635363639376437363839 Jan 14 23:49:23.620000 audit: BPF prog-id=109 op=LOAD Jan 14 23:49:23.626000 audit: BPF prog-id=110 op=LOAD Jan 14 23:49:23.626000 audit[3124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000216180 a2=98 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.629000 audit: BPF prog-id=110 op=UNLOAD Jan 14 23:49:23.629000 audit[3124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.629000 audit: BPF prog-id=111 op=LOAD Jan 14 23:49:23.629000 audit[3124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002163e8 a2=98 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.629000 audit: BPF prog-id=112 op=LOAD Jan 14 23:49:23.629000 audit[3124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 
a1=4000216168 a2=98 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.630000 audit: BPF prog-id=112 op=UNLOAD Jan 14 23:49:23.630000 audit[3124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.630000 audit: BPF prog-id=111 op=UNLOAD Jan 14 23:49:23.630000 audit[3124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.630000 audit: BPF prog-id=113 op=LOAD Jan 14 23:49:23.630000 audit[3124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000216648 a2=98 a3=0 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643237323732623434393265376239316233626336353332656433 Jan 14 23:49:23.658000 audit: BPF prog-id=114 op=LOAD Jan 14 23:49:23.661000 audit: BPF prog-id=115 op=LOAD Jan 14 23:49:23.661000 audit[3135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.663000 audit: BPF prog-id=115 op=UNLOAD Jan 14 23:49:23.663000 audit[3135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.664000 audit: BPF prog-id=116 op=LOAD Jan 14 23:49:23.664000 audit[3135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.666000 audit: BPF prog-id=117 op=LOAD Jan 14 23:49:23.666000 audit[3135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.666000 audit: BPF prog-id=117 op=UNLOAD Jan 14 23:49:23.666000 audit[3135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.666000 audit: BPF prog-id=116 op=UNLOAD Jan 14 23:49:23.666000 audit[3135]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.666000 audit: BPF prog-id=118 op=LOAD Jan 14 23:49:23.666000 audit[3135]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=3033 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:23.666000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336623033666162323931663564643330376662303035386265376531 Jan 14 23:49:23.694367 containerd[1996]: time="2026-01-14T23:49:23.694285989Z" level=info msg="StartContainer for \"f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56\" returns successfully" Jan 14 23:49:23.776360 containerd[1996]: time="2026-01-14T23:49:23.776152737Z" level=info msg="StartContainer for \"03d27272b4492e7b91b3bc6532ed36ccbc0db1707f96f071e7142b544e63c61c\" returns successfully" Jan 14 23:49:23.781257 containerd[1996]: time="2026-01-14T23:49:23.781084221Z" level=info msg="StartContainer for \"c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8\" returns successfully" Jan 14 23:49:23.863157 kubelet[2944]: E0114 23:49:23.862928 2944 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.197:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.197:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 23:49:23.913999 update_engine[1963]: I20260114 23:49:23.912627 1963 update_attempter.cc:509] Updating boot flags... Jan 14 23:49:23.976100 kubelet[2944]: E0114 23:49:23.976011 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-197?timeout=10s\": dial tcp 172.31.18.197:6443: connect: connection refused" interval="1.6s" Jan 14 23:49:24.190699 kubelet[2944]: I0114 23:49:24.189440 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-197" Jan 14 23:49:24.687614 kubelet[2944]: E0114 23:49:24.683097 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:24.695380 kubelet[2944]: E0114 23:49:24.695331 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:24.705826 kubelet[2944]: E0114 23:49:24.705748 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:25.704545 kubelet[2944]: E0114 23:49:25.704491 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:25.705270 kubelet[2944]: E0114 23:49:25.705182 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:25.707542 kubelet[2944]: E0114 23:49:25.707493 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:26.709503 kubelet[2944]: E0114 23:49:26.709449 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:26.710620 kubelet[2944]: E0114 
23:49:26.710552 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:26.712400 kubelet[2944]: E0114 23:49:26.711539 2944 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:29.535891 kubelet[2944]: I0114 23:49:29.535825 2944 apiserver.go:52] "Watching apiserver" Jan 14 23:49:29.655374 kubelet[2944]: I0114 23:49:29.655306 2944 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 23:49:29.774842 kubelet[2944]: E0114 23:49:29.774760 2944 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-197\" not found" node="ip-172-31-18-197" Jan 14 23:49:29.791510 kubelet[2944]: E0114 23:49:29.791251 2944 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-197.188abddd741adbfb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-197,UID:ip-172-31-18-197,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-197,},FirstTimestamp:2026-01-14 23:49:22.529680379 +0000 UTC m=+2.123550624,LastTimestamp:2026-01-14 23:49:22.529680379 +0000 UTC m=+2.123550624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-197,}" Jan 14 23:49:29.794430 kubelet[2944]: I0114 23:49:29.794349 2944 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-197" Jan 14 23:49:29.794430 kubelet[2944]: E0114 23:49:29.794419 2944 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-197\": node \"ip-172-31-18-197\" not found" Jan 14 23:49:29.855012 kubelet[2944]: I0114 23:49:29.854939 2944 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:29.936492 kubelet[2944]: E0114 23:49:29.936286 2944 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-197.188abddd767e51ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-197,UID:ip-172-31-18-197,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-18-197,},FirstTimestamp:2026-01-14 23:49:22.569753071 +0000 UTC m=+2.163623316,LastTimestamp:2026-01-14 23:49:22.569753071 +0000 UTC m=+2.163623316,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-197,}" Jan 14 23:49:29.952675 kubelet[2944]: E0114 23:49:29.952299 2944 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-197\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:29.952675 kubelet[2944]: I0114 23:49:29.952348 2944 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:29.967997 kubelet[2944]: E0114 23:49:29.967937 2944 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-197\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:29.967997 kubelet[2944]: I0114 23:49:29.967989 2944 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:29.976150 kubelet[2944]: E0114 23:49:29.976091 2944 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-197\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:32.146415 kubelet[2944]: I0114 23:49:32.146355 2944 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:32.646482 kubelet[2944]: I0114 23:49:32.646151 2944 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-197" podStartSLOduration=0.646081433 podStartE2EDuration="646.081433ms" podCreationTimestamp="2026-01-14 23:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:49:32.642816317 +0000 UTC m=+12.236686622" watchObservedRunningTime="2026-01-14 23:49:32.646081433 +0000 UTC m=+12.239951666" Jan 14 23:49:33.838191 kubelet[2944]: I0114 23:49:33.837982 2944 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:33.871863 systemd[1]: Reload requested from client PID 3406 ('systemctl') (unit session-7.scope)... Jan 14 23:49:33.871888 systemd[1]: Reloading... Jan 14 23:49:34.046440 kubelet[2944]: I0114 23:49:34.046356 2944 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-197" podStartSLOduration=1.046334056 podStartE2EDuration="1.046334056s" podCreationTimestamp="2026-01-14 23:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:49:33.878491759 +0000 UTC m=+13.472361980" watchObservedRunningTime="2026-01-14 23:49:34.046334056 +0000 UTC m=+13.640204289" Jan 14 23:49:34.114617 zram_generator::config[3456]: No configuration found. Jan 14 23:49:34.645460 systemd[1]: Reloading finished in 772 ms. Jan 14 23:49:34.714692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:49:34.731628 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 23:49:34.732521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:49:34.738665 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 14 23:49:34.738803 kernel: audit: type=1131 audit(1768434574.732:405): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:34.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:34.734699 systemd[1]: kubelet.service: Consumed 3.046s CPU time, 128.7M memory peak. 
Jan 14 23:49:34.744134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:49:34.752000 audit: BPF prog-id=119 op=LOAD Jan 14 23:49:34.757217 kernel: audit: type=1334 audit(1768434574.752:406): prog-id=119 op=LOAD Jan 14 23:49:34.757351 kernel: audit: type=1334 audit(1768434574.752:407): prog-id=75 op=UNLOAD Jan 14 23:49:34.752000 audit: BPF prog-id=75 op=UNLOAD Jan 14 23:49:34.760135 kernel: audit: type=1334 audit(1768434574.754:408): prog-id=120 op=LOAD Jan 14 23:49:34.754000 audit: BPF prog-id=120 op=LOAD Jan 14 23:49:34.762721 kernel: audit: type=1334 audit(1768434574.754:409): prog-id=76 op=UNLOAD Jan 14 23:49:34.754000 audit: BPF prog-id=76 op=UNLOAD Jan 14 23:49:34.756000 audit: BPF prog-id=121 op=LOAD Jan 14 23:49:34.764734 kernel: audit: type=1334 audit(1768434574.756:410): prog-id=121 op=LOAD Jan 14 23:49:34.766399 kernel: audit: type=1334 audit(1768434574.756:411): prog-id=80 op=UNLOAD Jan 14 23:49:34.756000 audit: BPF prog-id=80 op=UNLOAD Jan 14 23:49:34.759000 audit: BPF prog-id=122 op=LOAD Jan 14 23:49:34.768269 kernel: audit: type=1334 audit(1768434574.759:412): prog-id=122 op=LOAD Jan 14 23:49:34.771031 kernel: audit: type=1334 audit(1768434574.759:413): prog-id=69 op=UNLOAD Jan 14 23:49:34.759000 audit: BPF prog-id=69 op=UNLOAD Jan 14 23:49:34.760000 audit: BPF prog-id=123 op=LOAD Jan 14 23:49:34.773149 kernel: audit: type=1334 audit(1768434574.760:414): prog-id=123 op=LOAD Jan 14 23:49:34.760000 audit: BPF prog-id=124 op=LOAD Jan 14 23:49:34.760000 audit: BPF prog-id=70 op=UNLOAD Jan 14 23:49:34.760000 audit: BPF prog-id=71 op=UNLOAD Jan 14 23:49:34.765000 audit: BPF prog-id=125 op=LOAD Jan 14 23:49:34.767000 audit: BPF prog-id=81 op=UNLOAD Jan 14 23:49:34.768000 audit: BPF prog-id=126 op=LOAD Jan 14 23:49:34.768000 audit: BPF prog-id=127 op=LOAD Jan 14 23:49:34.768000 audit: BPF prog-id=82 op=UNLOAD Jan 14 23:49:34.768000 audit: BPF prog-id=83 op=UNLOAD Jan 14 23:49:34.774000 audit: BPF prog-id=128 op=LOAD Jan 14 23:49:34.774000 audit: BPF prog-id=86 op=UNLOAD Jan 14 23:49:34.774000 audit: BPF prog-id=129 op=LOAD Jan 14 23:49:34.774000 audit: BPF prog-id=130 op=LOAD Jan 14 23:49:34.774000 audit: BPF prog-id=87 op=UNLOAD Jan 14 23:49:34.774000 audit: BPF prog-id=88 op=UNLOAD Jan 14 23:49:34.776000 audit: BPF prog-id=131 op=LOAD Jan 14 23:49:34.776000 audit: BPF prog-id=77 op=UNLOAD Jan 14 23:49:34.776000 audit: BPF prog-id=132 op=LOAD Jan 14 23:49:34.776000 audit: BPF prog-id=133 op=LOAD Jan 14 23:49:34.776000 audit: BPF prog-id=78 op=UNLOAD Jan 14 23:49:34.776000 audit: BPF prog-id=79 op=UNLOAD Jan 14 23:49:34.777000 audit: BPF prog-id=134 op=LOAD Jan 14 23:49:34.777000 audit: BPF prog-id=135 op=LOAD Jan 14 23:49:34.777000 audit: BPF prog-id=84 op=UNLOAD Jan 14 23:49:34.777000 audit: BPF prog-id=85 op=UNLOAD Jan 14 23:49:34.780000 audit: BPF prog-id=136 op=LOAD Jan 14 23:49:34.780000 audit: BPF prog-id=72 op=UNLOAD Jan 14 23:49:34.781000 audit: BPF prog-id=137 op=LOAD Jan 14 23:49:34.781000 audit: BPF prog-id=138 op=LOAD Jan 14 23:49:34.781000 audit: BPF prog-id=73 op=UNLOAD Jan 14 23:49:34.781000 audit: BPF prog-id=74 op=UNLOAD Jan 14 23:49:35.134892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:49:35.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:49:35.156344 (kubelet)[3513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:49:35.281459 kubelet[3513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:49:35.281459 kubelet[3513]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:49:35.281459 kubelet[3513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:49:35.281459 kubelet[3513]: I0114 23:49:35.281292 3513 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:49:35.300610 kubelet[3513]: I0114 23:49:35.300541 3513 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 23:49:35.301065 kubelet[3513]: I0114 23:49:35.300799 3513 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:49:35.301473 kubelet[3513]: I0114 23:49:35.301445 3513 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 23:49:35.304258 kubelet[3513]: I0114 23:49:35.304222 3513 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 23:49:35.309600 kubelet[3513]: I0114 23:49:35.309539 3513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:49:35.323279 kubelet[3513]: I0114 23:49:35.322919 3513 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:49:35.329055 kubelet[3513]: I0114 23:49:35.329004 3513 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 23:49:35.329454 kubelet[3513]: I0114 23:49:35.329409 3513 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:49:35.329754 kubelet[3513]: I0114 23:49:35.329457 3513 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:49:35.329919 kubelet[3513]: I0114 23:49:35.329774 3513 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 23:49:35.329919 kubelet[3513]: I0114 23:49:35.329796 3513 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 23:49:35.329919 kubelet[3513]: I0114 23:49:35.329870 3513 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:49:35.330159 kubelet[3513]: I0114 23:49:35.330132 3513 kubelet.go:480] "Attempting to sync node with API server" Jan 14 23:49:35.330233 kubelet[3513]: I0114 23:49:35.330166 3513 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:49:35.330233 kubelet[3513]: I0114 23:49:35.330212 3513 kubelet.go:386] "Adding apiserver pod source" Jan 14 23:49:35.330632 kubelet[3513]: I0114 23:49:35.330237 3513 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:49:35.332885 kubelet[3513]: I0114 23:49:35.332816 3513 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:49:35.333844 kubelet[3513]: I0114 23:49:35.333804 3513 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 23:49:35.342022 kubelet[3513]: I0114 23:49:35.341973 3513 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 23:49:35.342143 kubelet[3513]: I0114 23:49:35.342044 3513 server.go:1289] "Started kubelet" Jan 14 23:49:35.356508 kubelet[3513]: I0114 23:49:35.354999 3513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:49:35.363732 kubelet[3513]: I0114 
23:49:35.363648 3513 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:49:35.368245 kubelet[3513]: I0114 23:49:35.367529 3513 server.go:317] "Adding debug handlers to kubelet server" Jan 14 23:49:35.378634 kubelet[3513]: I0114 23:49:35.378482 3513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:49:35.378953 kubelet[3513]: I0114 23:49:35.378908 3513 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:49:35.379291 kubelet[3513]: I0114 23:49:35.379245 3513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:49:35.388972 kubelet[3513]: I0114 23:49:35.388782 3513 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 23:49:35.392370 kubelet[3513]: E0114 23:49:35.391824 3513 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-197\" not found" Jan 14 23:49:35.405609 kubelet[3513]: I0114 23:49:35.399313 3513 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 23:49:35.405609 kubelet[3513]: I0114 23:49:35.399542 3513 reconciler.go:26] "Reconciler: start to sync state" Jan 14 23:49:35.427405 kubelet[3513]: I0114 23:49:35.427339 3513 factory.go:223] Registration of the systemd container factory successfully Jan 14 23:49:35.427873 kubelet[3513]: I0114 23:49:35.427494 3513 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:49:35.436745 kubelet[3513]: I0114 23:49:35.436207 3513 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 23:49:35.439351 kubelet[3513]: E0114 23:49:35.439257 3513 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 23:49:35.440979 kubelet[3513]: I0114 23:49:35.440915 3513 factory.go:223] Registration of the containerd container factory successfully Jan 14 23:49:35.460539 kubelet[3513]: I0114 23:49:35.460313 3513 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 23:49:35.460539 kubelet[3513]: I0114 23:49:35.460359 3513 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 23:49:35.460539 kubelet[3513]: I0114 23:49:35.460396 3513 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
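The kubelet entries above follow the standard klog header layout: a severity letter (I, W, E, or F), the month and day, a microsecond timestamp, the logging PID, and the source file and line, followed by the message with structured key="value" pairs. A minimal Python sketch for splitting that header out of a line like the ones above (the regex and field names are illustrative, not part of any kubelet tooling):

import re

# klog header as seen in the kubelet lines above:
#   <sev><MMDD> <HH:MM:SS.ffffff> <pid> <file:line>] <message>
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+) '
    r'(?P<source>[^\]]+)\] '
    r'(?P<msg>.*)'
)

def parse_klog(line):
    """Return the klog header fields of one kubelet line, or None if it does not match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

print(parse_klog('I0114 23:49:35.300541 3513 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"'))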
Jan 14 23:49:35.460539 kubelet[3513]: I0114 23:49:35.460411 3513 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 23:49:35.460539 kubelet[3513]: E0114 23:49:35.460496 3513 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:49:35.552210 kubelet[3513]: I0114 23:49:35.552152 3513 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:49:35.552210 kubelet[3513]: I0114 23:49:35.552187 3513 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:49:35.552429 kubelet[3513]: I0114 23:49:35.552224 3513 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:49:35.552487 kubelet[3513]: I0114 23:49:35.552454 3513 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 23:49:35.552537 kubelet[3513]: I0114 23:49:35.552475 3513 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 23:49:35.552537 kubelet[3513]: I0114 23:49:35.552506 3513 policy_none.go:49] "None policy: Start" Jan 14 23:49:35.552537 kubelet[3513]: I0114 23:49:35.552523 3513 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 23:49:35.552725 kubelet[3513]: I0114 23:49:35.552543 3513 state_mem.go:35] "Initializing new in-memory state store" Jan 14 23:49:35.552776 kubelet[3513]: I0114 23:49:35.552744 3513 state_mem.go:75] "Updated machine memory state" Jan 14 23:49:35.560678 kubelet[3513]: E0114 23:49:35.560629 3513 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 23:49:35.562620 kubelet[3513]: E0114 23:49:35.562238 3513 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 23:49:35.563698 kubelet[3513]: I0114 23:49:35.563654 3513 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:49:35.563820 kubelet[3513]: I0114 23:49:35.563690 3513 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:49:35.565872 kubelet[3513]: I0114 23:49:35.565843 3513 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:49:35.573237 kubelet[3513]: E0114 23:49:35.573192 3513 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 23:49:35.685017 kubelet[3513]: I0114 23:49:35.683862 3513 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-197" Jan 14 23:49:35.702446 kubelet[3513]: I0114 23:49:35.701946 3513 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-197" Jan 14 23:49:35.702446 kubelet[3513]: I0114 23:49:35.702076 3513 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-197" Jan 14 23:49:35.762462 kubelet[3513]: I0114 23:49:35.762409 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:35.763169 kubelet[3513]: I0114 23:49:35.763140 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:35.763513 kubelet[3513]: I0114 23:49:35.762418 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:35.777005 kubelet[3513]: E0114 23:49:35.776938 3513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-197\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:35.778404 kubelet[3513]: E0114 23:49:35.778294 3513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-197\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:35.807977 kubelet[3513]: I0114 23:49:35.807708 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a1e434e62c735300e7198691e88e3d7-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-197\" (UID: \"6a1e434e62c735300e7198691e88e3d7\") " pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:35.807977 kubelet[3513]: I0114 23:49:35.807779 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bc83122488d7c43421b8e3e7827bdd2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-197\" (UID: \"9bc83122488d7c43421b8e3e7827bdd2\") " pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:35.807977 kubelet[3513]: I0114 23:49:35.807829 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bc83122488d7c43421b8e3e7827bdd2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-197\" (UID: \"9bc83122488d7c43421b8e3e7827bdd2\") " pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:35.807977 kubelet[3513]: I0114 23:49:35.807868 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bc83122488d7c43421b8e3e7827bdd2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-197\" (UID: \"9bc83122488d7c43421b8e3e7827bdd2\") " pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:35.807977 kubelet[3513]: I0114 23:49:35.807915 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:35.809571 kubelet[3513]: I0114 23:49:35.808567 3513 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:35.809571 kubelet[3513]: I0114 23:49:35.808683 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:35.809571 kubelet[3513]: I0114 23:49:35.808728 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:35.809571 kubelet[3513]: I0114 23:49:35.808782 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b48e8f30b30d49e16633ce6306866fbb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-197\" (UID: \"b48e8f30b30d49e16633ce6306866fbb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-197" Jan 14 23:49:36.332215 kubelet[3513]: I0114 23:49:36.332142 3513 apiserver.go:52] "Watching apiserver" Jan 14 23:49:36.406138 kubelet[3513]: I0114 23:49:36.406079 3513 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 23:49:36.494273 kubelet[3513]: I0114 23:49:36.494218 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:36.497796 kubelet[3513]: I0114 23:49:36.497736 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:36.511435 kubelet[3513]: E0114 23:49:36.511391 3513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-197\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-197" Jan 14 23:49:36.513888 kubelet[3513]: E0114 23:49:36.513733 3513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-197\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-197" Jan 14 23:49:36.550926 kubelet[3513]: I0114 23:49:36.550247 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-197" podStartSLOduration=1.550106132 podStartE2EDuration="1.550106132s" podCreationTimestamp="2026-01-14 23:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:49:36.530728676 +0000 UTC m=+1.365189163" watchObservedRunningTime="2026-01-14 23:49:36.550106132 +0000 UTC m=+1.384566631" Jan 14 23:49:38.268710 kubelet[3513]: I0114 23:49:38.268644 3513 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 23:49:38.270533 kubelet[3513]: I0114 23:49:38.270160 3513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" 
newPodCIDR="192.168.0.0/24" Jan 14 23:49:38.271809 containerd[1996]: time="2026-01-14T23:49:38.269805465Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 23:49:39.055712 systemd[1]: Created slice kubepods-besteffort-pod083235b8_41cd_4c11_839a_53345868b73f.slice - libcontainer container kubepods-besteffort-pod083235b8_41cd_4c11_839a_53345868b73f.slice. Jan 14 23:49:39.127536 kubelet[3513]: I0114 23:49:39.127311 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/083235b8-41cd-4c11-839a-53345868b73f-lib-modules\") pod \"kube-proxy-rp88g\" (UID: \"083235b8-41cd-4c11-839a-53345868b73f\") " pod="kube-system/kube-proxy-rp88g" Jan 14 23:49:39.127536 kubelet[3513]: I0114 23:49:39.127394 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfl5\" (UniqueName: \"kubernetes.io/projected/083235b8-41cd-4c11-839a-53345868b73f-kube-api-access-qqfl5\") pod \"kube-proxy-rp88g\" (UID: \"083235b8-41cd-4c11-839a-53345868b73f\") " pod="kube-system/kube-proxy-rp88g" Jan 14 23:49:39.127536 kubelet[3513]: I0114 23:49:39.127441 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/083235b8-41cd-4c11-839a-53345868b73f-kube-proxy\") pod \"kube-proxy-rp88g\" (UID: \"083235b8-41cd-4c11-839a-53345868b73f\") " pod="kube-system/kube-proxy-rp88g" Jan 14 23:49:39.127536 kubelet[3513]: I0114 23:49:39.127481 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/083235b8-41cd-4c11-839a-53345868b73f-xtables-lock\") pod \"kube-proxy-rp88g\" (UID: \"083235b8-41cd-4c11-839a-53345868b73f\") " pod="kube-system/kube-proxy-rp88g" Jan 14 23:49:39.368383 containerd[1996]: time="2026-01-14T23:49:39.368243158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rp88g,Uid:083235b8-41cd-4c11-839a-53345868b73f,Namespace:kube-system,Attempt:0,}" Jan 14 23:49:39.409651 containerd[1996]: time="2026-01-14T23:49:39.408281687Z" level=info msg="connecting to shim 9f9ffe739b8a67cf48391bd71dcadf6a221cecff9c9ebed9389f419e86a00611" address="unix:///run/containerd/s/92e0946b0ce1874860202badb97cc32722e4e56b7c6e5504f478920358361a8a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:49:39.517252 systemd[1]: Started cri-containerd-9f9ffe739b8a67cf48391bd71dcadf6a221cecff9c9ebed9389f419e86a00611.scope - libcontainer container 9f9ffe739b8a67cf48391bd71dcadf6a221cecff9c9ebed9389f419e86a00611. Jan 14 23:49:39.571903 systemd[1]: Created slice kubepods-besteffort-pode3fa86c0_e678_4308_bcca_a509f5953997.slice - libcontainer container kubepods-besteffort-pode3fa86c0_e678_4308_bcca_a509f5953997.slice. 
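The slice names created here encode each pod's QoS class and UID: with the systemd cgroup driver the kubelet names per-pod slices kubepods-<qos>-pod<uid>.slice and replaces the dashes inside the UID with underscores, since "-" is systemd's slice-hierarchy separator. Undoing that escaping recovers the same UID shown in the kube-proxy-rp88g volume entries above (083235b8-41cd-4c11-839a-53345868b73f). A small Python sketch of that mapping (the helper name and regex are illustrative):

import re

# kubepods-<qos>-pod<uid>.slice, with "-" in the UID escaped to "_";
# guaranteed pods omit the <qos> segment entirely.
SLICE_RE = re.compile(r'kubepods-(?:(?:besteffort|burstable)-)?pod(?P<uid>[0-9a-f_]+)\.slice')

def pod_uid_from_slice(slice_name):
    """Recover the Kubernetes pod UID from a kubepods systemd slice name."""
    m = SLICE_RE.search(slice_name)
    return m.group('uid').replace('_', '-') if m else None

print(pod_uid_from_slice(
    'kubepods-besteffort-pod083235b8_41cd_4c11_839a_53345868b73f.slice'
))  # -> 083235b8-41cd-4c11-839a-53345868b73f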
Jan 14 23:49:39.583000 audit: BPF prog-id=139 op=LOAD Jan 14 23:49:39.585000 audit: BPF prog-id=140 op=LOAD Jan 14 23:49:39.585000 audit[3580]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.585000 audit: BPF prog-id=140 op=UNLOAD Jan 14 23:49:39.585000 audit[3580]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.586000 audit: BPF prog-id=141 op=LOAD Jan 14 23:49:39.586000 audit[3580]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.586000 audit: BPF prog-id=142 op=LOAD Jan 14 23:49:39.586000 audit[3580]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.587000 audit: BPF prog-id=142 op=UNLOAD Jan 14 23:49:39.587000 audit[3580]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.587000 audit: BPF prog-id=141 op=UNLOAD Jan 14 23:49:39.587000 audit[3580]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.587000 audit: BPF prog-id=143 op=LOAD Jan 14 23:49:39.587000 audit[3580]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3568 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966396666653733396238613637636634383339316264373164636164 Jan 14 23:49:39.618203 containerd[1996]: time="2026-01-14T23:49:39.618140160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rp88g,Uid:083235b8-41cd-4c11-839a-53345868b73f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f9ffe739b8a67cf48391bd71dcadf6a221cecff9c9ebed9389f419e86a00611\"" Jan 14 23:49:39.628557 containerd[1996]: time="2026-01-14T23:49:39.628295652Z" level=info msg="CreateContainer within sandbox \"9f9ffe739b8a67cf48391bd71dcadf6a221cecff9c9ebed9389f419e86a00611\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 23:49:39.633025 kubelet[3513]: I0114 23:49:39.632706 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7l8x\" (UniqueName: \"kubernetes.io/projected/e3fa86c0-e678-4308-bcca-a509f5953997-kube-api-access-v7l8x\") pod \"tigera-operator-7dcd859c48-m4t4n\" (UID: \"e3fa86c0-e678-4308-bcca-a509f5953997\") " pod="tigera-operator/tigera-operator-7dcd859c48-m4t4n" Jan 14 23:49:39.633025 kubelet[3513]: I0114 23:49:39.632782 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3fa86c0-e678-4308-bcca-a509f5953997-var-lib-calico\") pod \"tigera-operator-7dcd859c48-m4t4n\" (UID: \"e3fa86c0-e678-4308-bcca-a509f5953997\") " pod="tigera-operator/tigera-operator-7dcd859c48-m4t4n" Jan 14 23:49:39.649012 containerd[1996]: time="2026-01-14T23:49:39.648918384Z" level=info msg="Container bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:49:39.659339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654511017.mount: Deactivated successfully. 
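The audit records above come from runc preparing the kube-proxy pod sandbox; their PROCTITLE values are hex-encoded process argv with NUL bytes between arguments, and the kernel caps the recorded length, which is why the long runc (and, further down, iptables/ip6tables) command lines appear cut off. A minimal Python sketch that turns such a value back into a readable command line, decoding only the leading bytes of the first PROCTITLE recorded for pid 3580 above:

def decode_proctitle(hex_value):
    """Hex-decode an audit PROCTITLE value and render its NUL-separated argv as one line."""
    return bytes.fromhex(hex_value).replace(b'\x00', b' ').decode(errors='replace')

# Leading bytes of the first runc PROCTITLE value above (truncated by the kernel):
print(decode_proctitle(
    '72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F'
))  # -> runc --root /run/containerd/runc/k8s.io

The NETFILTER_CFG audit records later in the log carry the same encoding; their PROCTITLE values decode to the iptables and ip6tables invocations that create the KUBE-PROXY-CANARY, KUBE-SERVICES, and related chains.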
Jan 14 23:49:39.669432 containerd[1996]: time="2026-01-14T23:49:39.669371988Z" level=info msg="CreateContainer within sandbox \"9f9ffe739b8a67cf48391bd71dcadf6a221cecff9c9ebed9389f419e86a00611\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981\"" Jan 14 23:49:39.672347 containerd[1996]: time="2026-01-14T23:49:39.672274728Z" level=info msg="StartContainer for \"bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981\"" Jan 14 23:49:39.678806 containerd[1996]: time="2026-01-14T23:49:39.678742680Z" level=info msg="connecting to shim bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981" address="unix:///run/containerd/s/92e0946b0ce1874860202badb97cc32722e4e56b7c6e5504f478920358361a8a" protocol=ttrpc version=3 Jan 14 23:49:39.714944 systemd[1]: Started cri-containerd-bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981.scope - libcontainer container bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981. Jan 14 23:49:39.790000 audit: BPF prog-id=144 op=LOAD Jan 14 23:49:39.792211 kernel: kauditd_printk_skb: 54 callbacks suppressed Jan 14 23:49:39.792267 kernel: audit: type=1334 audit(1768434579.790:455): prog-id=144 op=LOAD Jan 14 23:49:39.790000 audit[3605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.800170 kernel: audit: type=1300 audit(1768434579.790:455): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.806623 kernel: audit: type=1327 audit(1768434579.790:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.808920 kernel: audit: type=1334 audit(1768434579.793:456): prog-id=145 op=LOAD Jan 14 23:49:39.793000 audit: BPF prog-id=145 op=LOAD Jan 14 23:49:39.793000 audit[3605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.818669 kernel: audit: type=1300 audit(1768434579.793:456): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3568 pid=3605 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.818772 kernel: audit: type=1327 audit(1768434579.793:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.793000 audit: BPF prog-id=145 op=UNLOAD Jan 14 23:49:39.824330 kernel: audit: type=1334 audit(1768434579.793:457): prog-id=145 op=UNLOAD Jan 14 23:49:39.793000 audit[3605]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.830504 kernel: audit: type=1300 audit(1768434579.793:457): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.831700 kernel: audit: type=1327 audit(1768434579.793:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.793000 audit: BPF prog-id=144 op=UNLOAD Jan 14 23:49:39.839082 kernel: audit: type=1334 audit(1768434579.793:458): prog-id=144 op=UNLOAD Jan 14 23:49:39.793000 audit[3605]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.793000 audit: BPF prog-id=146 op=LOAD Jan 14 23:49:39.793000 audit[3605]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3568 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:39.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266316433366138333733636434343136613964333535353962656534 Jan 14 23:49:39.864998 containerd[1996]: time="2026-01-14T23:49:39.864916225Z" level=info msg="StartContainer for 
\"bf1d36a8373cd4416a9d35559bee4dd8dbc5d62f89fe369e5efd600cf608b981\" returns successfully" Jan 14 23:49:39.883494 containerd[1996]: time="2026-01-14T23:49:39.882364681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m4t4n,Uid:e3fa86c0-e678-4308-bcca-a509f5953997,Namespace:tigera-operator,Attempt:0,}" Jan 14 23:49:39.919668 containerd[1996]: time="2026-01-14T23:49:39.919240693Z" level=info msg="connecting to shim 7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a" address="unix:///run/containerd/s/112527f83027f3a4ecb09e8766a78af86446564ea97ee63dbd2c288c1a82f777" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:49:39.976906 systemd[1]: Started cri-containerd-7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a.scope - libcontainer container 7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a. Jan 14 23:49:40.009000 audit: BPF prog-id=147 op=LOAD Jan 14 23:49:40.010000 audit: BPF prog-id=148 op=LOAD Jan 14 23:49:40.010000 audit[3657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.011000 audit: BPF prog-id=148 op=UNLOAD Jan 14 23:49:40.011000 audit[3657]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.011000 audit: BPF prog-id=149 op=LOAD Jan 14 23:49:40.011000 audit[3657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.011000 audit: BPF prog-id=150 op=LOAD Jan 14 23:49:40.011000 audit[3657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.011000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.011000 audit: BPF prog-id=150 op=UNLOAD Jan 14 23:49:40.011000 audit[3657]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.011000 audit: BPF prog-id=149 op=UNLOAD Jan 14 23:49:40.011000 audit[3657]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.011000 audit: BPF prog-id=151 op=LOAD Jan 14 23:49:40.011000 audit[3657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3646 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762666631356538323938656231393031656133373566316266393561 Jan 14 23:49:40.074754 containerd[1996]: time="2026-01-14T23:49:40.074570806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m4t4n,Uid:e3fa86c0-e678-4308-bcca-a509f5953997,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a\"" Jan 14 23:49:40.080068 containerd[1996]: time="2026-01-14T23:49:40.079926442Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 23:49:40.158000 audit[3717]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.158000 audit[3717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd705e5d0 a2=0 a3=1 items=0 ppid=3619 pid=3717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:49:40.161000 audit[3718]: NETFILTER_CFG table=nat:55 family=10 entries=1 
op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.161000 audit[3718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6795a00 a2=0 a3=1 items=0 ppid=3619 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:49:40.163000 audit[3716]: NETFILTER_CFG table=mangle:56 family=2 entries=1 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.163000 audit[3716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc57b6110 a2=0 a3=1 items=0 ppid=3619 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:49:40.163000 audit[3719]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.163000 audit[3719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8108560 a2=0 a3=1 items=0 ppid=3619 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:49:40.167000 audit[3721]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.167000 audit[3721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcff8b420 a2=0 a3=1 items=0 ppid=3619 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:49:40.170000 audit[3724]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.170000 audit[3724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4463070 a2=0 a3=1 items=0 ppid=3619 pid=3724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:49:40.275000 audit[3726]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.275000 audit[3726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff1102eb0 a2=0 a3=1 items=0 
ppid=3619 pid=3726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:49:40.281000 audit[3728]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.281000 audit[3728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd597cf10 a2=0 a3=1 items=0 ppid=3619 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 23:49:40.289000 audit[3731]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.289000 audit[3731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff18662e0 a2=0 a3=1 items=0 ppid=3619 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 23:49:40.292000 audit[3732]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.292000 audit[3732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfb66390 a2=0 a3=1 items=0 ppid=3619 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:49:40.298000 audit[3734]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.298000 audit[3734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef580c30 a2=0 a3=1 items=0 ppid=3619 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:49:40.301000 audit[3735]: NETFILTER_CFG table=filter:65 family=2 
entries=1 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.301000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2590650 a2=0 a3=1 items=0 ppid=3619 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:49:40.307000 audit[3737]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.307000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc11e5ca0 a2=0 a3=1 items=0 ppid=3619 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 23:49:40.315000 audit[3740]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.315000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe8ba9eb0 a2=0 a3=1 items=0 ppid=3619 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 23:49:40.318000 audit[3741]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.318000 audit[3741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4f3b530 a2=0 a3=1 items=0 ppid=3619 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:49:40.323000 audit[3743]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.323000 audit[3743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1fcccf0 a2=0 a3=1 items=0 ppid=3619 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.323000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:49:40.326000 audit[3744]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.326000 audit[3744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4b6da60 a2=0 a3=1 items=0 ppid=3619 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:49:40.331000 audit[3746]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.331000 audit[3746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2ea3690 a2=0 a3=1 items=0 ppid=3619 pid=3746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:49:40.339000 audit[3749]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.339000 audit[3749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2064e60 a2=0 a3=1 items=0 ppid=3619 pid=3749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:49:40.347000 audit[3752]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.347000 audit[3752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4e786f0 a2=0 a3=1 items=0 ppid=3619 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 23:49:40.349000 audit[3753]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.349000 audit[3753]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffcfafe10 a2=0 a3=1 items=0 ppid=3619 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:49:40.355000 audit[3755]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.355000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc2fcfa70 a2=0 a3=1 items=0 ppid=3619 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:49:40.363000 audit[3758]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.363000 audit[3758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe932c500 a2=0 a3=1 items=0 ppid=3619 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:49:40.366000 audit[3759]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.366000 audit[3759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9645e50 a2=0 a3=1 items=0 ppid=3619 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:49:40.371000 audit[3761]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:49:40.371000 audit[3761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff423f430 a2=0 a3=1 items=0 ppid=3619 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:49:40.413000 audit[3767]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3767 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:40.413000 audit[3767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff060c810 a2=0 a3=1 items=0 ppid=3619 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:40.426000 audit[3767]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:40.426000 audit[3767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff060c810 a2=0 a3=1 items=0 ppid=3619 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:40.429000 audit[3772]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.429000 audit[3772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc1698fb0 a2=0 a3=1 items=0 ppid=3619 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:49:40.435000 audit[3774]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.435000 audit[3774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd1076b80 a2=0 a3=1 items=0 ppid=3619 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 23:49:40.443000 audit[3777]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.443000 audit[3777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff848ec50 a2=0 a3=1 items=0 ppid=3619 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.443000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 23:49:40.445000 audit[3778]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.445000 audit[3778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea5f3130 a2=0 a3=1 items=0 ppid=3619 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.445000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:49:40.451000 audit[3780]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.451000 audit[3780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7c085d0 a2=0 a3=1 items=0 ppid=3619 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:49:40.453000 audit[3781]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.453000 audit[3781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf97f200 a2=0 a3=1 items=0 ppid=3619 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:49:40.459000 audit[3783]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.459000 audit[3783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe2a778c0 a2=0 a3=1 items=0 ppid=3619 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 23:49:40.467000 audit[3786]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.467000 audit[3786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe38e6920 a2=0 a3=1 items=0 ppid=3619 pid=3786 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 23:49:40.470000 audit[3787]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.470000 audit[3787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff07969e0 a2=0 a3=1 items=0 ppid=3619 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:49:40.475000 audit[3789]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3789 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.475000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd508fb40 a2=0 a3=1 items=0 ppid=3619 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:49:40.478000 audit[3790]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.478000 audit[3790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0c6daf0 a2=0 a3=1 items=0 ppid=3619 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:49:40.483000 audit[3792]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.483000 audit[3792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc47b3180 a2=0 a3=1 items=0 ppid=3619 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.483000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:49:40.491000 audit[3795]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule 
pid=3795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.491000 audit[3795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcca02080 a2=0 a3=1 items=0 ppid=3619 pid=3795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.491000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 23:49:40.499000 audit[3798]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.499000 audit[3798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5e14180 a2=0 a3=1 items=0 ppid=3619 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 23:49:40.501000 audit[3799]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.501000 audit[3799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce9a4aa0 a2=0 a3=1 items=0 ppid=3619 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:49:40.507000 audit[3801]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.507000 audit[3801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffed3044c0 a2=0 a3=1 items=0 ppid=3619 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:49:40.515000 audit[3804]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.515000 audit[3804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc278060 a2=0 a3=1 items=0 ppid=3619 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.515000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:49:40.518000 audit[3805]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.518000 audit[3805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff14e1a90 a2=0 a3=1 items=0 ppid=3619 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:49:40.530000 audit[3807]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.530000 audit[3807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe6225660 a2=0 a3=1 items=0 ppid=3619 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:49:40.534000 audit[3808]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.534000 audit[3808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7e36740 a2=0 a3=1 items=0 ppid=3619 pid=3808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:49:40.543000 audit[3810]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.543000 audit[3810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd6d8e090 a2=0 a3=1 items=0 ppid=3619 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:49:40.570000 audit[3813]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:49:40.570000 audit[3813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff66023e0 a2=0 a3=1 items=0 ppid=3619 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:49:40.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:49:40.579000 audit[3815]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:49:40.579000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd42c7e40 a2=0 a3=1 items=0 ppid=3619 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.579000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:40.580000 audit[3815]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:49:40.580000 audit[3815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd42c7e40 a2=0 a3=1 items=0 ppid=3619 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:40.580000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:40.591338 kubelet[3513]: I0114 23:49:40.591220 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rp88g" podStartSLOduration=1.591199332 podStartE2EDuration="1.591199332s" podCreationTimestamp="2026-01-14 23:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:49:40.546219168 +0000 UTC m=+5.380679655" watchObservedRunningTime="2026-01-14 23:49:40.591199332 +0000 UTC m=+5.425659807" Jan 14 23:49:41.523726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706722765.mount: Deactivated successfully. 
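The PROCTITLE fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch to decode one of these values back into argv (the helper name is arbitrary and purely illustrative; the example string is the iptables-restore PROCTITLE that recurs above):

def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

# Example taken verbatim from the audit log above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']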
Jan 14 23:49:42.334909 containerd[1996]: time="2026-01-14T23:49:42.334827049Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:42.336455 containerd[1996]: time="2026-01-14T23:49:42.336226261Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 14 23:49:42.337953 containerd[1996]: time="2026-01-14T23:49:42.337895773Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:42.342850 containerd[1996]: time="2026-01-14T23:49:42.342782509Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:49:42.344623 containerd[1996]: time="2026-01-14T23:49:42.344418421Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.264096651s" Jan 14 23:49:42.344623 containerd[1996]: time="2026-01-14T23:49:42.344470357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 23:49:42.351324 containerd[1996]: time="2026-01-14T23:49:42.351143497Z" level=info msg="CreateContainer within sandbox \"7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 23:49:42.365072 containerd[1996]: time="2026-01-14T23:49:42.363934633Z" level=info msg="Container dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:49:42.371475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567096813.mount: Deactivated successfully. Jan 14 23:49:42.379697 containerd[1996]: time="2026-01-14T23:49:42.379644145Z" level=info msg="CreateContainer within sandbox \"7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba\"" Jan 14 23:49:42.380818 containerd[1996]: time="2026-01-14T23:49:42.380755405Z" level=info msg="StartContainer for \"dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba\"" Jan 14 23:49:42.385408 containerd[1996]: time="2026-01-14T23:49:42.385314577Z" level=info msg="connecting to shim dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba" address="unix:///run/containerd/s/112527f83027f3a4ecb09e8766a78af86446564ea97ee63dbd2c288c1a82f777" protocol=ttrpc version=3 Jan 14 23:49:42.428090 systemd[1]: Started cri-containerd-dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba.scope - libcontainer container dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba. 
Jan 14 23:49:42.452000 audit: BPF prog-id=152 op=LOAD Jan 14 23:49:42.454000 audit: BPF prog-id=153 op=LOAD Jan 14 23:49:42.454000 audit[3824]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.454000 audit: BPF prog-id=153 op=UNLOAD Jan 14 23:49:42.454000 audit[3824]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.454000 audit: BPF prog-id=154 op=LOAD Jan 14 23:49:42.454000 audit[3824]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.455000 audit: BPF prog-id=155 op=LOAD Jan 14 23:49:42.455000 audit[3824]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.455000 audit: BPF prog-id=155 op=UNLOAD Jan 14 23:49:42.455000 audit[3824]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.455000 audit: BPF prog-id=154 op=UNLOAD Jan 14 23:49:42.455000 audit[3824]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.455000 audit: BPF prog-id=156 op=LOAD Jan 14 23:49:42.455000 audit[3824]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3646 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:42.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461633138383861393135306534366439343666323733643234393830 Jan 14 23:49:42.497479 containerd[1996]: time="2026-01-14T23:49:42.497377526Z" level=info msg="StartContainer for \"dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba\" returns successfully" Jan 14 23:49:51.465455 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 23:49:51.465643 kernel: audit: type=1106 audit(1768434591.455:527): pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:49:51.455000 audit[2353]: USER_END pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:49:51.456298 sudo[2353]: pam_unix(sudo:session): session closed for user root Jan 14 23:49:51.463000 audit[2353]: CRED_DISP pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:49:51.476880 kernel: audit: type=1104 audit(1768434591.463:528): pid=2353 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:49:51.546621 sshd[2352]: Connection closed by 20.161.92.111 port 53680 Jan 14 23:49:51.547349 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Jan 14 23:49:51.551000 audit[2349]: USER_END pid=2349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:49:51.560027 systemd[1]: sshd@6-172.31.18.197:22-20.161.92.111:53680.service: Deactivated successfully. 
Jan 14 23:49:51.551000 audit[2349]: CRED_DISP pid=2349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:49:51.568228 kernel: audit: type=1106 audit(1768434591.551:529): pid=2349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:49:51.569029 kernel: audit: type=1104 audit(1768434591.551:530): pid=2349 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:49:51.572364 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 23:49:51.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.197:22-20.161.92.111:53680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:51.577956 kernel: audit: type=1131 audit(1768434591.559:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.197:22-20.161.92.111:53680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:49:51.578510 systemd[1]: session-7.scope: Consumed 12.338s CPU time, 223.1M memory peak. Jan 14 23:49:51.584047 systemd-logind[1960]: Session 7 logged out. Waiting for processes to exit. Jan 14 23:49:51.588697 systemd-logind[1960]: Removed session 7. 
Jan 14 23:49:58.325000 audit[3906]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.325000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe93b18b0 a2=0 a3=1 items=0 ppid=3619 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.331878 kernel: audit: type=1325 audit(1768434598.325:532): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:58.344228 kernel: audit: type=1300 audit(1768434598.325:532): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe93b18b0 a2=0 a3=1 items=0 ppid=3619 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.344339 kernel: audit: type=1327 audit(1768434598.325:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:58.335000 audit[3906]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.335000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe93b18b0 a2=0 a3=1 items=0 ppid=3619 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.358663 kernel: audit: type=1325 audit(1768434598.335:533): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.358785 kernel: audit: type=1300 audit(1768434598.335:533): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe93b18b0 a2=0 a3=1 items=0 ppid=3619 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:58.362149 kernel: audit: type=1327 audit(1768434598.335:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:58.363000 audit[3908]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.363000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffc991ad0 a2=0 a3=1 items=0 ppid=3619 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.368867 kernel: audit: type=1325 audit(1768434598.363:534): table=filter:107 family=2 entries=16 
op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:58.379645 kernel: audit: type=1300 audit(1768434598.363:534): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffc991ad0 a2=0 a3=1 items=0 ppid=3619 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.379764 kernel: audit: type=1327 audit(1768434598.363:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:49:58.380000 audit[3908]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.380000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc991ad0 a2=0 a3=1 items=0 ppid=3619 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:49:58.386701 kernel: audit: type=1325 audit(1768434598.380:535): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:49:58.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.707000 audit[3910]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.714631 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 14 23:50:06.714786 kernel: audit: type=1325 audit(1768434606.707:536): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.707000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffed6bd020 a2=0 a3=1 items=0 ppid=3619 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.725439 kernel: audit: type=1300 audit(1768434606.707:536): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffed6bd020 a2=0 a3=1 items=0 ppid=3619 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.731632 kernel: audit: type=1327 audit(1768434606.707:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.717000 audit[3910]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.737547 kernel: audit: type=1325 audit(1768434606.717:537): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3910 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.717000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed6bd020 a2=0 a3=1 items=0 ppid=3619 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.747619 kernel: audit: type=1300 audit(1768434606.717:537): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed6bd020 a2=0 a3=1 items=0 ppid=3619 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.753630 kernel: audit: type=1327 audit(1768434606.717:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.768000 audit[3912]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.768000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffff7f31f0 a2=0 a3=1 items=0 ppid=3619 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.780368 kernel: audit: type=1325 audit(1768434606.768:538): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.780493 kernel: audit: type=1300 audit(1768434606.768:538): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffff7f31f0 a2=0 a3=1 items=0 ppid=3619 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.785216 kernel: audit: type=1327 audit(1768434606.768:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.780000 audit[3912]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:06.780000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff7f31f0 a2=0 a3=1 items=0 ppid=3619 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:06.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:06.791672 kernel: audit: type=1325 audit(1768434606.780:539): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:07.824000 audit[3914]: NETFILTER_CFG table=filter:113 family=2 entries=19 
op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:07.824000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc6c9ac90 a2=0 a3=1 items=0 ppid=3619 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:07.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:07.831000 audit[3914]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:07.831000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6c9ac90 a2=0 a3=1 items=0 ppid=3619 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:07.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.182351 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 14 23:50:12.182512 kernel: audit: type=1325 audit(1768434612.176:542): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.176000 audit[3919]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.176000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd3337a90 a2=0 a3=1 items=0 ppid=3619 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.196232 kernel: audit: type=1300 audit(1768434612.176:542): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd3337a90 a2=0 a3=1 items=0 ppid=3619 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.196358 kernel: audit: type=1327 audit(1768434612.176:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.197000 audit[3919]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.197000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3337a90 a2=0 a3=1 items=0 ppid=3619 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.209311 kernel: audit: type=1325 audit(1768434612.197:543): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.209435 kernel: 
audit: type=1300 audit(1768434612.197:543): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3337a90 a2=0 a3=1 items=0 ppid=3619 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.213416 kernel: audit: type=1327 audit(1768434612.197:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.236000 audit[3921]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.242629 kernel: audit: type=1325 audit(1768434612.236:544): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.236000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffefe48fa0 a2=0 a3=1 items=0 ppid=3619 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.253697 kubelet[3513]: I0114 23:50:12.252764 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-m4t4n" podStartSLOduration=30.984613711 podStartE2EDuration="33.25273719s" podCreationTimestamp="2026-01-14 23:49:39 +0000 UTC" firstStartedPulling="2026-01-14 23:49:40.07817275 +0000 UTC m=+4.912633213" lastFinishedPulling="2026-01-14 23:49:42.346296217 +0000 UTC m=+7.180756692" observedRunningTime="2026-01-14 23:49:42.578620946 +0000 UTC m=+7.413081433" watchObservedRunningTime="2026-01-14 23:50:12.25273719 +0000 UTC m=+37.087197893" Jan 14 23:50:12.255895 kernel: audit: type=1300 audit(1768434612.236:544): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffefe48fa0 a2=0 a3=1 items=0 ppid=3619 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.265782 kernel: audit: type=1327 audit(1768434612.236:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.270000 audit[3921]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.270000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffefe48fa0 a2=0 a3=1 items=0 ppid=3619 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:12.278631 kernel: audit: type=1325 audit(1768434612.270:545): table=nat:118 family=2 
entries=12 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:12.284831 systemd[1]: Created slice kubepods-besteffort-podee52f487_fda1_4ab4_8c6f_3892fa353d5b.slice - libcontainer container kubepods-besteffort-podee52f487_fda1_4ab4_8c6f_3892fa353d5b.slice. Jan 14 23:50:12.352069 kubelet[3513]: I0114 23:50:12.351894 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcd5v\" (UniqueName: \"kubernetes.io/projected/ee52f487-fda1-4ab4-8c6f-3892fa353d5b-kube-api-access-bcd5v\") pod \"calico-typha-ddd99bf55-zqzzj\" (UID: \"ee52f487-fda1-4ab4-8c6f-3892fa353d5b\") " pod="calico-system/calico-typha-ddd99bf55-zqzzj" Jan 14 23:50:12.352069 kubelet[3513]: I0114 23:50:12.351981 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee52f487-fda1-4ab4-8c6f-3892fa353d5b-tigera-ca-bundle\") pod \"calico-typha-ddd99bf55-zqzzj\" (UID: \"ee52f487-fda1-4ab4-8c6f-3892fa353d5b\") " pod="calico-system/calico-typha-ddd99bf55-zqzzj" Jan 14 23:50:12.352069 kubelet[3513]: I0114 23:50:12.352022 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee52f487-fda1-4ab4-8c6f-3892fa353d5b-typha-certs\") pod \"calico-typha-ddd99bf55-zqzzj\" (UID: \"ee52f487-fda1-4ab4-8c6f-3892fa353d5b\") " pod="calico-system/calico-typha-ddd99bf55-zqzzj" Jan 14 23:50:12.595840 containerd[1996]: time="2026-01-14T23:50:12.595777327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ddd99bf55-zqzzj,Uid:ee52f487-fda1-4ab4-8c6f-3892fa353d5b,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:12.650283 containerd[1996]: time="2026-01-14T23:50:12.650195120Z" level=info msg="connecting to shim 8e5186c56ce659fe90e33816652d041d6cb347562dbe8eb638bc476323e4acee" address="unix:///run/containerd/s/ce6ab9049192dcc8ea29a62dd9b4df83b36112bb0e8e87bcdb0b06bfac5eaaa3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:12.677290 systemd[1]: Created slice kubepods-besteffort-pod39557301_0163_4a1a_aa02_feb533edac3f.slice - libcontainer container kubepods-besteffort-pod39557301_0163_4a1a_aa02_feb533edac3f.slice. 
Jan 14 23:50:12.755445 kubelet[3513]: I0114 23:50:12.755384 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39557301-0163-4a1a-aa02-feb533edac3f-node-certs\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.755948 kubelet[3513]: I0114 23:50:12.755914 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-cni-net-dir\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.756160 kubelet[3513]: I0114 23:50:12.756120 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-policysync\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.756448 kubelet[3513]: I0114 23:50:12.756302 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-var-lib-calico\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.756610 kubelet[3513]: I0114 23:50:12.756378 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-xtables-lock\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.756914 kubelet[3513]: I0114 23:50:12.756782 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39557301-0163-4a1a-aa02-feb533edac3f-tigera-ca-bundle\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.756914 kubelet[3513]: I0114 23:50:12.756876 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-var-run-calico\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.757214 kubelet[3513]: I0114 23:50:12.757137 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-cni-log-dir\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.757214 kubelet[3513]: I0114 23:50:12.757207 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-flexvol-driver-host\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.757659 kubelet[3513]: I0114 23:50:12.757247 3513 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbmp\" (UniqueName: \"kubernetes.io/projected/39557301-0163-4a1a-aa02-feb533edac3f-kube-api-access-dnbmp\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.757659 kubelet[3513]: I0114 23:50:12.757287 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-cni-bin-dir\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.757659 kubelet[3513]: I0114 23:50:12.757334 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39557301-0163-4a1a-aa02-feb533edac3f-lib-modules\") pod \"calico-node-gxr77\" (UID: \"39557301-0163-4a1a-aa02-feb533edac3f\") " pod="calico-system/calico-node-gxr77" Jan 14 23:50:12.760289 systemd[1]: Started cri-containerd-8e5186c56ce659fe90e33816652d041d6cb347562dbe8eb638bc476323e4acee.scope - libcontainer container 8e5186c56ce659fe90e33816652d041d6cb347562dbe8eb638bc476323e4acee. Jan 14 23:50:12.791355 kubelet[3513]: E0114 23:50:12.789192 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:12.834000 audit: BPF prog-id=157 op=LOAD Jan 14 23:50:12.836000 audit: BPF prog-id=158 op=LOAD Jan 14 23:50:12.836000 audit[3943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.837000 audit: BPF prog-id=158 op=UNLOAD Jan 14 23:50:12.837000 audit[3943]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.837000 audit: BPF prog-id=159 op=LOAD Jan 14 23:50:12.837000 audit[3943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.837000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.837000 audit: BPF prog-id=160 op=LOAD Jan 14 23:50:12.837000 audit[3943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.837000 audit: BPF prog-id=160 op=UNLOAD Jan 14 23:50:12.837000 audit[3943]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.837000 audit: BPF prog-id=159 op=UNLOAD Jan 14 23:50:12.837000 audit[3943]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.837000 audit: BPF prog-id=161 op=LOAD Jan 14 23:50:12.837000 audit[3943]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3932 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:12.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353138366335366365363539666539306533333831363635326430 Jan 14 23:50:12.859955 kubelet[3513]: I0114 23:50:12.857944 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23b28fd4-dc96-481b-a69a-1d96358778f5-socket-dir\") pod \"csi-node-driver-8dsvj\" (UID: \"23b28fd4-dc96-481b-a69a-1d96358778f5\") " pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:12.859955 kubelet[3513]: I0114 23:50:12.858137 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/23b28fd4-dc96-481b-a69a-1d96358778f5-kubelet-dir\") pod \"csi-node-driver-8dsvj\" (UID: \"23b28fd4-dc96-481b-a69a-1d96358778f5\") " pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:12.859955 kubelet[3513]: I0114 23:50:12.858222 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23b28fd4-dc96-481b-a69a-1d96358778f5-registration-dir\") pod \"csi-node-driver-8dsvj\" (UID: \"23b28fd4-dc96-481b-a69a-1d96358778f5\") " pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:12.859955 kubelet[3513]: I0114 23:50:12.858276 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/23b28fd4-dc96-481b-a69a-1d96358778f5-varrun\") pod \"csi-node-driver-8dsvj\" (UID: \"23b28fd4-dc96-481b-a69a-1d96358778f5\") " pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:12.859955 kubelet[3513]: I0114 23:50:12.858315 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdx84\" (UniqueName: \"kubernetes.io/projected/23b28fd4-dc96-481b-a69a-1d96358778f5-kube-api-access-kdx84\") pod \"csi-node-driver-8dsvj\" (UID: \"23b28fd4-dc96-481b-a69a-1d96358778f5\") " pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:12.899834 kubelet[3513]: E0114 23:50:12.899773 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.899834 kubelet[3513]: W0114 23:50:12.899823 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.900042 kubelet[3513]: E0114 23:50:12.899875 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.959238 kubelet[3513]: E0114 23:50:12.959183 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.959238 kubelet[3513]: W0114 23:50:12.959226 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.959556 kubelet[3513]: E0114 23:50:12.959259 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.959981 kubelet[3513]: E0114 23:50:12.959935 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.959981 kubelet[3513]: W0114 23:50:12.959971 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.961000 kubelet[3513]: E0114 23:50:12.959999 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:12.961565 kubelet[3513]: E0114 23:50:12.961297 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.961565 kubelet[3513]: W0114 23:50:12.961328 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.961565 kubelet[3513]: E0114 23:50:12.961359 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.961994 kubelet[3513]: E0114 23:50:12.961969 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.962100 kubelet[3513]: W0114 23:50:12.962077 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.962384 kubelet[3513]: E0114 23:50:12.962196 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.963500 kubelet[3513]: E0114 23:50:12.963452 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.963830 kubelet[3513]: W0114 23:50:12.963798 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.963997 kubelet[3513]: E0114 23:50:12.963962 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.964563 kubelet[3513]: E0114 23:50:12.964535 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.964990 kubelet[3513]: W0114 23:50:12.964739 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.964990 kubelet[3513]: E0114 23:50:12.964777 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.965267 kubelet[3513]: E0114 23:50:12.965243 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.965610 kubelet[3513]: W0114 23:50:12.965355 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.965610 kubelet[3513]: E0114 23:50:12.965389 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:12.965984 kubelet[3513]: E0114 23:50:12.965956 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.966325 kubelet[3513]: W0114 23:50:12.966091 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.966325 kubelet[3513]: E0114 23:50:12.966128 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.966680 kubelet[3513]: E0114 23:50:12.966566 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.966837 kubelet[3513]: W0114 23:50:12.966810 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.966954 kubelet[3513]: E0114 23:50:12.966931 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.967426 kubelet[3513]: E0114 23:50:12.967398 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.967655 kubelet[3513]: W0114 23:50:12.967554 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.967805 kubelet[3513]: E0114 23:50:12.967776 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.968635 kubelet[3513]: E0114 23:50:12.968504 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.968635 kubelet[3513]: W0114 23:50:12.968535 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.968635 kubelet[3513]: E0114 23:50:12.968565 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.969923 kubelet[3513]: E0114 23:50:12.969510 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.970375 kubelet[3513]: W0114 23:50:12.970110 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.970375 kubelet[3513]: E0114 23:50:12.970157 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:12.971222 kubelet[3513]: E0114 23:50:12.970866 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.971880 kubelet[3513]: W0114 23:50:12.971395 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.971880 kubelet[3513]: E0114 23:50:12.971442 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.972802 kubelet[3513]: E0114 23:50:12.972766 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.972987 kubelet[3513]: W0114 23:50:12.972955 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.973148 kubelet[3513]: E0114 23:50:12.973118 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.974537 kubelet[3513]: E0114 23:50:12.973794 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.974537 kubelet[3513]: W0114 23:50:12.973822 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.974537 kubelet[3513]: E0114 23:50:12.973848 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.975277 kubelet[3513]: E0114 23:50:12.975015 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.975277 kubelet[3513]: W0114 23:50:12.975044 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.975277 kubelet[3513]: E0114 23:50:12.975073 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.975688 kubelet[3513]: E0114 23:50:12.975649 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.975865 kubelet[3513]: W0114 23:50:12.975821 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.976332 kubelet[3513]: E0114 23:50:12.975991 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:12.976788 kubelet[3513]: E0114 23:50:12.976758 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.977425 kubelet[3513]: W0114 23:50:12.977386 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.977663 kubelet[3513]: E0114 23:50:12.977636 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.978520 kubelet[3513]: E0114 23:50:12.978283 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.978520 kubelet[3513]: W0114 23:50:12.978315 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.978520 kubelet[3513]: E0114 23:50:12.978344 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.980068 kubelet[3513]: E0114 23:50:12.979763 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.980068 kubelet[3513]: W0114 23:50:12.979797 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.980068 kubelet[3513]: E0114 23:50:12.979829 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.980626 kubelet[3513]: E0114 23:50:12.980557 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.980846 kubelet[3513]: W0114 23:50:12.980723 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.980846 kubelet[3513]: E0114 23:50:12.980758 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.982233 kubelet[3513]: E0114 23:50:12.981803 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.982233 kubelet[3513]: W0114 23:50:12.981835 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.982233 kubelet[3513]: E0114 23:50:12.981865 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:12.982914 kubelet[3513]: E0114 23:50:12.982792 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.983118 kubelet[3513]: W0114 23:50:12.983045 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.983260 kubelet[3513]: E0114 23:50:12.983236 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.983859 kubelet[3513]: E0114 23:50:12.983831 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.984405 kubelet[3513]: W0114 23:50:12.983998 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.984405 kubelet[3513]: E0114 23:50:12.984031 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.985652 kubelet[3513]: E0114 23:50:12.985553 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:12.986005 kubelet[3513]: W0114 23:50:12.985787 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:12.986005 kubelet[3513]: E0114 23:50:12.985823 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:12.993060 containerd[1996]: time="2026-01-14T23:50:12.992958825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gxr77,Uid:39557301-0163-4a1a-aa02-feb533edac3f,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:13.013496 kubelet[3513]: E0114 23:50:13.013445 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:13.013687 kubelet[3513]: W0114 23:50:13.013547 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:13.013687 kubelet[3513]: E0114 23:50:13.013602 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:13.072670 containerd[1996]: time="2026-01-14T23:50:13.072236790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ddd99bf55-zqzzj,Uid:ee52f487-fda1-4ab4-8c6f-3892fa353d5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e5186c56ce659fe90e33816652d041d6cb347562dbe8eb638bc476323e4acee\"" Jan 14 23:50:13.074415 containerd[1996]: time="2026-01-14T23:50:13.073996362Z" level=info msg="connecting to shim f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944" address="unix:///run/containerd/s/ac5b095456ce1b5d00daa7691e82cbc2b077d83324b628876608eefa07e7ad8c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:13.080176 containerd[1996]: time="2026-01-14T23:50:13.080041746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 23:50:13.141940 systemd[1]: Started cri-containerd-f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944.scope - libcontainer container f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944. Jan 14 23:50:13.177000 audit: BPF prog-id=162 op=LOAD Jan 14 23:50:13.179000 audit: BPF prog-id=163 op=LOAD Jan 14 23:50:13.179000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.179000 audit: BPF prog-id=163 op=UNLOAD Jan 14 23:50:13.179000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.179000 audit: BPF prog-id=164 op=LOAD Jan 14 23:50:13.179000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.179000 audit: BPF prog-id=165 op=LOAD Jan 14 23:50:13.179000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:50:13.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.179000 audit: BPF prog-id=165 op=UNLOAD Jan 14 23:50:13.179000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.179000 audit: BPF prog-id=164 op=UNLOAD Jan 14 23:50:13.179000 audit[4019]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.180000 audit: BPF prog-id=166 op=LOAD Jan 14 23:50:13.180000 audit[4019]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4008 pid=4019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613264323163313637616538373466323936336533363162616161 Jan 14 23:50:13.212363 containerd[1996]: time="2026-01-14T23:50:13.212250331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gxr77,Uid:39557301-0163-4a1a-aa02-feb533edac3f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\"" Jan 14 23:50:13.290000 audit[4045]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:13.290000 audit[4045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffddfbb760 a2=0 a3=1 items=0 ppid=3619 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:13.294000 audit[4045]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:13.294000 audit[4045]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffddfbb760 a2=0 a3=1 items=0 ppid=3619 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:13.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:14.291429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301709253.mount: Deactivated successfully. Jan 14 23:50:14.461818 kubelet[3513]: E0114 23:50:14.461304 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:15.000666 containerd[1996]: time="2026-01-14T23:50:15.000504103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:15.003304 containerd[1996]: time="2026-01-14T23:50:15.003142975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 14 23:50:15.004525 containerd[1996]: time="2026-01-14T23:50:15.004472779Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:15.018178 containerd[1996]: time="2026-01-14T23:50:15.018108799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:15.020321 containerd[1996]: time="2026-01-14T23:50:15.020186599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.940055201s" Jan 14 23:50:15.020321 containerd[1996]: time="2026-01-14T23:50:15.020276059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 23:50:15.024448 containerd[1996]: time="2026-01-14T23:50:15.023325152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 23:50:15.058653 containerd[1996]: time="2026-01-14T23:50:15.058569884Z" level=info msg="CreateContainer within sandbox \"8e5186c56ce659fe90e33816652d041d6cb347562dbe8eb638bc476323e4acee\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 23:50:15.072621 containerd[1996]: time="2026-01-14T23:50:15.071882816Z" level=info msg="Container 89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:50:15.083757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924927952.mount: Deactivated successfully. 
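Annotation: the audit PROCTITLE records above store the audited process's command line as one hex string, with NUL bytes separating the argv elements. As a minimal sketch (the helper name is illustrative, not part of any tool shown in this log), the value from the iptables-restore record can be decoded like this:

```python
# Decode an auditd PROCTITLE hex string back into argv.
# auditd records the command line as hex-encoded bytes with NUL
# separators between arguments.

def decode_proctitle(hex_string: str) -> list[str]:
    raw = bytes.fromhex(hex_string)
    # argv elements are NUL-separated; drop any empty trailing field.
    return [part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # Hex value copied from the iptables-restore PROCTITLE record above.
    sample = ("69707461626C65732D726573746F7265002D770035002D570031"
              "3030303030002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

The same decoding applied to the long runc PROCTITLE values yields the `runc --root /run/containerd/runc/k8s.io --log ...` invocations that containerd issues for each sandbox and container task.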
Jan 14 23:50:15.094719 containerd[1996]: time="2026-01-14T23:50:15.094622528Z" level=info msg="CreateContainer within sandbox \"8e5186c56ce659fe90e33816652d041d6cb347562dbe8eb638bc476323e4acee\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32\"" Jan 14 23:50:15.096404 containerd[1996]: time="2026-01-14T23:50:15.096266672Z" level=info msg="StartContainer for \"89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32\"" Jan 14 23:50:15.100318 containerd[1996]: time="2026-01-14T23:50:15.100220924Z" level=info msg="connecting to shim 89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32" address="unix:///run/containerd/s/ce6ab9049192dcc8ea29a62dd9b4df83b36112bb0e8e87bcdb0b06bfac5eaaa3" protocol=ttrpc version=3 Jan 14 23:50:15.137949 systemd[1]: Started cri-containerd-89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32.scope - libcontainer container 89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32. Jan 14 23:50:15.166000 audit: BPF prog-id=167 op=LOAD Jan 14 23:50:15.167000 audit: BPF prog-id=168 op=LOAD Jan 14 23:50:15.167000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.168000 audit: BPF prog-id=168 op=UNLOAD Jan 14 23:50:15.168000 audit[4056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.168000 audit: BPF prog-id=169 op=LOAD Jan 14 23:50:15.168000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.168000 audit: BPF prog-id=170 op=LOAD Jan 14 23:50:15.168000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.168000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.169000 audit: BPF prog-id=170 op=UNLOAD Jan 14 23:50:15.169000 audit[4056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.169000 audit: BPF prog-id=169 op=UNLOAD Jan 14 23:50:15.169000 audit[4056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.169000 audit: BPF prog-id=171 op=LOAD Jan 14 23:50:15.169000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3932 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839636639653839616664353333343531663036353663663461646135 Jan 14 23:50:15.236298 containerd[1996]: time="2026-01-14T23:50:15.236219133Z" level=info msg="StartContainer for \"89cf9e89afd533451f0656cf4ada5a368b2dfb93c1e86f568856eab092521a32\" returns successfully" Jan 14 23:50:15.708458 kubelet[3513]: I0114 23:50:15.708264 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ddd99bf55-zqzzj" podStartSLOduration=1.764181454 podStartE2EDuration="3.708244067s" podCreationTimestamp="2026-01-14 23:50:12 +0000 UTC" firstStartedPulling="2026-01-14 23:50:13.078311814 +0000 UTC m=+37.912772277" lastFinishedPulling="2026-01-14 23:50:15.022374415 +0000 UTC m=+39.856834890" observedRunningTime="2026-01-14 23:50:15.707873339 +0000 UTC m=+40.542333826" watchObservedRunningTime="2026-01-14 23:50:15.708244067 +0000 UTC m=+40.542704554" Jan 14 23:50:15.755696 kubelet[3513]: E0114 23:50:15.755339 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.755696 kubelet[3513]: W0114 23:50:15.755382 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" Jan 14 23:50:15.755696 kubelet[3513]: E0114 23:50:15.755419 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.758886 kubelet[3513]: E0114 23:50:15.757439 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.758886 kubelet[3513]: W0114 23:50:15.757510 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.758886 kubelet[3513]: E0114 23:50:15.757716 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.759219 kubelet[3513]: E0114 23:50:15.759174 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.759345 kubelet[3513]: W0114 23:50:15.759249 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.759399 kubelet[3513]: E0114 23:50:15.759363 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.761651 kubelet[3513]: E0114 23:50:15.761263 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.761651 kubelet[3513]: W0114 23:50:15.761308 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.761651 kubelet[3513]: E0114 23:50:15.761460 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.764337 kubelet[3513]: E0114 23:50:15.764283 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.764337 kubelet[3513]: W0114 23:50:15.764323 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.764865 kubelet[3513]: E0114 23:50:15.764359 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.766862 kubelet[3513]: E0114 23:50:15.766816 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.767277 kubelet[3513]: W0114 23:50:15.767241 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.767277 kubelet[3513]: E0114 23:50:15.767332 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.768519 kubelet[3513]: E0114 23:50:15.768456 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.769128 kubelet[3513]: W0114 23:50:15.768649 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.769128 kubelet[3513]: E0114 23:50:15.768687 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.772075 kubelet[3513]: E0114 23:50:15.772020 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.772253 kubelet[3513]: W0114 23:50:15.772062 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.772870 kubelet[3513]: E0114 23:50:15.772309 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.774598 kubelet[3513]: E0114 23:50:15.773336 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.774598 kubelet[3513]: W0114 23:50:15.773373 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.774598 kubelet[3513]: E0114 23:50:15.773685 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.777065 kubelet[3513]: E0114 23:50:15.777016 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.777065 kubelet[3513]: W0114 23:50:15.777054 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.777065 kubelet[3513]: E0114 23:50:15.777088 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.779842 kubelet[3513]: E0114 23:50:15.779733 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.779842 kubelet[3513]: W0114 23:50:15.779774 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.779842 kubelet[3513]: E0114 23:50:15.779808 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.781917 kubelet[3513]: E0114 23:50:15.781152 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.781917 kubelet[3513]: W0114 23:50:15.781180 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.781917 kubelet[3513]: E0114 23:50:15.781211 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.784970 kubelet[3513]: E0114 23:50:15.784169 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.784970 kubelet[3513]: W0114 23:50:15.784680 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.784970 kubelet[3513]: E0114 23:50:15.784761 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.787289 kubelet[3513]: E0114 23:50:15.787218 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.787454 kubelet[3513]: W0114 23:50:15.787261 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.787454 kubelet[3513]: E0114 23:50:15.787338 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.788214 kubelet[3513]: E0114 23:50:15.788060 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.788214 kubelet[3513]: W0114 23:50:15.788094 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.788395 kubelet[3513]: E0114 23:50:15.788183 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.790896 kubelet[3513]: E0114 23:50:15.790843 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.791053 kubelet[3513]: W0114 23:50:15.791003 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.791053 kubelet[3513]: E0114 23:50:15.791039 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.793719 kubelet[3513]: E0114 23:50:15.793667 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.793719 kubelet[3513]: W0114 23:50:15.793707 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.793967 kubelet[3513]: E0114 23:50:15.793742 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.797538 kubelet[3513]: E0114 23:50:15.796793 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.797908 kubelet[3513]: W0114 23:50:15.797493 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.797908 kubelet[3513]: E0114 23:50:15.797770 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.800072 kubelet[3513]: E0114 23:50:15.799986 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.800072 kubelet[3513]: W0114 23:50:15.800022 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.800436 kubelet[3513]: E0114 23:50:15.800264 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.803362 kubelet[3513]: E0114 23:50:15.803318 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.803993 kubelet[3513]: W0114 23:50:15.803690 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.803993 kubelet[3513]: E0114 23:50:15.803738 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.805685 kubelet[3513]: E0114 23:50:15.805639 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.805924 kubelet[3513]: W0114 23:50:15.805896 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.806230 kubelet[3513]: E0114 23:50:15.806067 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.807373 kubelet[3513]: E0114 23:50:15.807307 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.807849 kubelet[3513]: W0114 23:50:15.807633 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.807849 kubelet[3513]: E0114 23:50:15.807681 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.809545 kubelet[3513]: E0114 23:50:15.809370 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.809545 kubelet[3513]: W0114 23:50:15.809438 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.809953 kubelet[3513]: E0114 23:50:15.809847 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.811865 kubelet[3513]: E0114 23:50:15.811828 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.812200 kubelet[3513]: W0114 23:50:15.811999 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.812200 kubelet[3513]: E0114 23:50:15.812036 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.812887 kubelet[3513]: E0114 23:50:15.812829 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.813570 kubelet[3513]: W0114 23:50:15.813071 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.813570 kubelet[3513]: E0114 23:50:15.813113 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.816008 kubelet[3513]: E0114 23:50:15.815752 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.816008 kubelet[3513]: W0114 23:50:15.815786 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.816008 kubelet[3513]: E0114 23:50:15.815830 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.817568 kubelet[3513]: E0114 23:50:15.817456 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.817730 kubelet[3513]: W0114 23:50:15.817630 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.817730 kubelet[3513]: E0114 23:50:15.817666 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.819129 kubelet[3513]: E0114 23:50:15.819033 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.819129 kubelet[3513]: W0114 23:50:15.819067 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.819129 kubelet[3513]: E0114 23:50:15.819100 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.821754 kubelet[3513]: E0114 23:50:15.821719 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.822391 kubelet[3513]: W0114 23:50:15.822092 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.822391 kubelet[3513]: E0114 23:50:15.822136 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.822000 audit[4116]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:15.822000 audit[4116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe9943d00 a2=0 a3=1 items=0 ppid=3619 pid=4116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:15.824651 kubelet[3513]: E0114 23:50:15.824015 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.824651 kubelet[3513]: W0114 23:50:15.824619 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.824879 kubelet[3513]: E0114 23:50:15.824685 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.825144 kubelet[3513]: E0114 23:50:15.825104 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.825144 kubelet[3513]: W0114 23:50:15.825134 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.825274 kubelet[3513]: E0114 23:50:15.825158 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.825936 kubelet[3513]: E0114 23:50:15.825884 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.826067 kubelet[3513]: W0114 23:50:15.825963 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.826129 kubelet[3513]: E0114 23:50:15.825992 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:50:15.826771 kubelet[3513]: E0114 23:50:15.826736 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:50:15.827061 kubelet[3513]: W0114 23:50:15.827010 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:50:15.827279 kubelet[3513]: E0114 23:50:15.827243 3513 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:50:15.829000 audit[4116]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:15.829000 audit[4116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe9943d00 a2=0 a3=1 items=0 ppid=3619 pid=4116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:15.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:16.320234 containerd[1996]: time="2026-01-14T23:50:16.320163322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:16.322297 containerd[1996]: time="2026-01-14T23:50:16.322221802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=2517" Jan 14 23:50:16.323867 containerd[1996]: time="2026-01-14T23:50:16.323805454Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:16.328761 containerd[1996]: time="2026-01-14T23:50:16.328676446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:16.330236 containerd[1996]: time="2026-01-14T23:50:16.329961982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.306542918s" Jan 14 23:50:16.330236 containerd[1996]: time="2026-01-14T23:50:16.330030466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 23:50:16.337306 containerd[1996]: time="2026-01-14T23:50:16.337247914Z" level=info msg="CreateContainer within sandbox \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 23:50:16.352028 containerd[1996]: time="2026-01-14T23:50:16.351955630Z" level=info msg="Container 38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:50:16.358658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301903524.mount: Deactivated successfully. 
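Annotation: the repeated kubelet errors above come from the FlexVolume prober execing the (missing) driver binary with `init` and then trying to parse an empty stdout as JSON, which is why "executable file not found in $PATH" and "unexpected end of JSON input" always appear together. The kubelet's actual logic lives in Go (driver-call.go); the following is only a rough Python sketch of that sequence under those assumptions, not the real implementation:

```python
# Loose illustration of the FlexVolume probe failure seen in the log:
# exec "<driver> init", capture stdout, then parse it as a JSON status.
# With the driver binary absent, stdout stays empty and the JSON parse
# fails (Go's encoding/json reports "unexpected end of JSON input";
# Python raises its own JSONDecodeError).
import json
import subprocess

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_flexvolume(driver_path: str) -> dict:
    output = ""
    try:
        output = subprocess.run(
            [driver_path, "init"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        print(f"driver call failed: {exc}, output: {output!r}")
    try:
        return json.loads(output)  # empty string -> parse error
    except json.JSONDecodeError as exc:
        print(f"failed to unmarshal init output: {exc}")
        return {}

if __name__ == "__main__":
    probe_flexvolume(DRIVER)
```

In this log the probe is retried on every plugin-directory rescan, which is why the same three-line error group repeats with new timestamps; the messages are noisy but harmless unless a workload actually depends on the nodeagent~uds FlexVolume driver.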
Jan 14 23:50:16.371887 containerd[1996]: time="2026-01-14T23:50:16.371694022Z" level=info msg="CreateContainer within sandbox \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086\"" Jan 14 23:50:16.373570 containerd[1996]: time="2026-01-14T23:50:16.373077142Z" level=info msg="StartContainer for \"38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086\"" Jan 14 23:50:16.380528 containerd[1996]: time="2026-01-14T23:50:16.380386630Z" level=info msg="connecting to shim 38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086" address="unix:///run/containerd/s/ac5b095456ce1b5d00daa7691e82cbc2b077d83324b628876608eefa07e7ad8c" protocol=ttrpc version=3 Jan 14 23:50:16.420266 systemd[1]: Started cri-containerd-38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086.scope - libcontainer container 38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086. Jan 14 23:50:16.462117 kubelet[3513]: E0114 23:50:16.462016 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:16.500000 audit: BPF prog-id=172 op=LOAD Jan 14 23:50:16.500000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4008 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:16.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353838623262333434643839353738636162386235623365666337 Jan 14 23:50:16.501000 audit: BPF prog-id=173 op=LOAD Jan 14 23:50:16.501000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4008 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:16.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353838623262333434643839353738636162386235623365666337 Jan 14 23:50:16.501000 audit: BPF prog-id=173 op=UNLOAD Jan 14 23:50:16.501000 audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:16.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353838623262333434643839353738636162386235623365666337 Jan 14 23:50:16.502000 audit: BPF prog-id=172 op=UNLOAD Jan 14 23:50:16.502000 
audit[4137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:16.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353838623262333434643839353738636162386235623365666337 Jan 14 23:50:16.502000 audit: BPF prog-id=174 op=LOAD Jan 14 23:50:16.502000 audit[4137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4008 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:16.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338353838623262333434643839353738636162386235623365666337 Jan 14 23:50:16.543530 containerd[1996]: time="2026-01-14T23:50:16.543391727Z" level=info msg="StartContainer for \"38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086\" returns successfully" Jan 14 23:50:16.573905 systemd[1]: cri-containerd-38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086.scope: Deactivated successfully. Jan 14 23:50:16.577000 audit: BPF prog-id=174 op=UNLOAD Jan 14 23:50:16.582626 containerd[1996]: time="2026-01-14T23:50:16.582504299Z" level=info msg="received container exit event container_id:\"38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086\" id:\"38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086\" pid:4150 exited_at:{seconds:1768434616 nanos:580445387}" Jan 14 23:50:16.625937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38588b2b344d89578cab8b5b3efc79f6d164dcf5f62fdf93c5406a38b6544086-rootfs.mount: Deactivated successfully. 
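The audit SYSCALL records emitted while runc sets up the container carry the full command line in the PROCTITLE field, hex-encoded with NUL-separated arguments. The sketch below decodes a prefix copied from the records above; the decoding rule is generic audit formatting, nothing specific to this host:

    # Decode an audit PROCTITLE value (argv hex-encoded, NUL-separated).
    # The prefix below is copied from the records above and decodes to
    # "runc --root /run/containerd/runc/k8s.io" (the full value continues
    # with --log and the task directory of the container being started).
    proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(bytes.fromhex(proctitle).replace(b"\x00", b" ").decode())

Applying the same decoding to the iptables-restore records earlier yields iptables-restore -w 5 -W 100000 --noflush --counters.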
Jan 14 23:50:17.706639 containerd[1996]: time="2026-01-14T23:50:17.705032953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 23:50:18.461518 kubelet[3513]: E0114 23:50:18.461278 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:20.461610 kubelet[3513]: E0114 23:50:20.461522 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:20.549604 containerd[1996]: time="2026-01-14T23:50:20.549502647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:20.550920 containerd[1996]: time="2026-01-14T23:50:20.550825443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 23:50:20.552563 containerd[1996]: time="2026-01-14T23:50:20.552473799Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:20.557980 containerd[1996]: time="2026-01-14T23:50:20.557900271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:20.560821 containerd[1996]: time="2026-01-14T23:50:20.560728347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.85490889s" Jan 14 23:50:20.560821 containerd[1996]: time="2026-01-14T23:50:20.560791371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 23:50:20.569329 containerd[1996]: time="2026-01-14T23:50:20.569273475Z" level=info msg="CreateContainer within sandbox \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 23:50:20.586612 containerd[1996]: time="2026-01-14T23:50:20.585420135Z" level=info msg="Container a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:50:20.594784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663374252.mount: Deactivated successfully. 
Jan 14 23:50:20.608755 containerd[1996]: time="2026-01-14T23:50:20.608556051Z" level=info msg="CreateContainer within sandbox \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c\"" Jan 14 23:50:20.611136 containerd[1996]: time="2026-01-14T23:50:20.609828219Z" level=info msg="StartContainer for \"a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c\"" Jan 14 23:50:20.613568 containerd[1996]: time="2026-01-14T23:50:20.613502439Z" level=info msg="connecting to shim a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c" address="unix:///run/containerd/s/ac5b095456ce1b5d00daa7691e82cbc2b077d83324b628876608eefa07e7ad8c" protocol=ttrpc version=3 Jan 14 23:50:20.658961 systemd[1]: Started cri-containerd-a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c.scope - libcontainer container a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c. Jan 14 23:50:20.738000 audit: BPF prog-id=175 op=LOAD Jan 14 23:50:20.740853 kernel: kauditd_printk_skb: 96 callbacks suppressed Jan 14 23:50:20.740937 kernel: audit: type=1334 audit(1768434620.738:580): prog-id=175 op=LOAD Jan 14 23:50:20.738000 audit[4195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.749478 kernel: audit: type=1300 audit(1768434620.738:580): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.756248 kernel: audit: type=1327 audit(1768434620.738:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.742000 audit: BPF prog-id=176 op=LOAD Jan 14 23:50:20.757899 kernel: audit: type=1334 audit(1768434620.742:581): prog-id=176 op=LOAD Jan 14 23:50:20.758089 kernel: audit: type=1300 audit(1768434620.742:581): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.742000 audit[4195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.742000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.771481 kernel: audit: type=1327 audit(1768434620.742:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.742000 audit: BPF prog-id=176 op=UNLOAD Jan 14 23:50:20.774398 kernel: audit: type=1334 audit(1768434620.742:582): prog-id=176 op=UNLOAD Jan 14 23:50:20.742000 audit[4195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.782496 kernel: audit: type=1300 audit(1768434620.742:582): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.782648 kernel: audit: type=1327 audit(1768434620.742:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.742000 audit: BPF prog-id=175 op=UNLOAD Jan 14 23:50:20.789984 kernel: audit: type=1334 audit(1768434620.742:583): prog-id=175 op=UNLOAD Jan 14 23:50:20.742000 audit[4195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.742000 audit: BPF prog-id=177 op=LOAD Jan 14 23:50:20.742000 audit[4195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4008 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:20.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135323661346463656438383532633431346434643736323638333761 Jan 14 23:50:20.814005 
containerd[1996]: time="2026-01-14T23:50:20.813948940Z" level=info msg="StartContainer for \"a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c\" returns successfully" Jan 14 23:50:21.903708 containerd[1996]: time="2026-01-14T23:50:21.903618882Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:50:21.907820 systemd[1]: cri-containerd-a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c.scope: Deactivated successfully. Jan 14 23:50:21.909145 systemd[1]: cri-containerd-a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c.scope: Consumed 1.013s CPU time, 190.1M memory peak, 165.9M written to disk. Jan 14 23:50:21.911000 audit: BPF prog-id=177 op=UNLOAD Jan 14 23:50:21.914457 kubelet[3513]: I0114 23:50:21.914365 3513 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 23:50:21.920395 containerd[1996]: time="2026-01-14T23:50:21.920001822Z" level=info msg="received container exit event container_id:\"a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c\" id:\"a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c\" pid:4208 exited_at:{seconds:1768434621 nanos:919529238}" Jan 14 23:50:22.006991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a526a4dced8852c414d4d7626837a63aa02072b5f59f5d30dcb66d6b841da91c-rootfs.mount: Deactivated successfully. Jan 14 23:50:22.032997 systemd[1]: Created slice kubepods-burstable-podcf34a3a4_eba9_46e0_9f84_01c9f3710543.slice - libcontainer container kubepods-burstable-podcf34a3a4_eba9_46e0_9f84_01c9f3710543.slice. 
Jan 14 23:50:22.044541 kubelet[3513]: I0114 23:50:22.044436 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd982afe-6d4f-4aff-a998-9f2578e041f1-config-volume\") pod \"coredns-674b8bbfcf-28nlv\" (UID: \"fd982afe-6d4f-4aff-a998-9f2578e041f1\") " pod="kube-system/coredns-674b8bbfcf-28nlv" Jan 14 23:50:22.045126 kubelet[3513]: I0114 23:50:22.044891 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4q8\" (UniqueName: \"kubernetes.io/projected/fd982afe-6d4f-4aff-a998-9f2578e041f1-kube-api-access-kt4q8\") pod \"coredns-674b8bbfcf-28nlv\" (UID: \"fd982afe-6d4f-4aff-a998-9f2578e041f1\") " pod="kube-system/coredns-674b8bbfcf-28nlv" Jan 14 23:50:22.045126 kubelet[3513]: I0114 23:50:22.045045 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b369b7-d986-41d1-8a2e-a01d832685f7-tigera-ca-bundle\") pod \"calico-kube-controllers-fdc6fb9d4-p5lnk\" (UID: \"21b369b7-d986-41d1-8a2e-a01d832685f7\") " pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" Jan 14 23:50:22.045472 kubelet[3513]: I0114 23:50:22.045407 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktsv\" (UniqueName: \"kubernetes.io/projected/cf34a3a4-eba9-46e0-9f84-01c9f3710543-kube-api-access-jktsv\") pod \"coredns-674b8bbfcf-cqpz4\" (UID: \"cf34a3a4-eba9-46e0-9f84-01c9f3710543\") " pod="kube-system/coredns-674b8bbfcf-cqpz4" Jan 14 23:50:22.051686 kubelet[3513]: I0114 23:50:22.046922 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fmf5\" (UniqueName: \"kubernetes.io/projected/21b369b7-d986-41d1-8a2e-a01d832685f7-kube-api-access-9fmf5\") pod \"calico-kube-controllers-fdc6fb9d4-p5lnk\" (UID: \"21b369b7-d986-41d1-8a2e-a01d832685f7\") " pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" Jan 14 23:50:22.051686 kubelet[3513]: I0114 23:50:22.051020 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf34a3a4-eba9-46e0-9f84-01c9f3710543-config-volume\") pod \"coredns-674b8bbfcf-cqpz4\" (UID: \"cf34a3a4-eba9-46e0-9f84-01c9f3710543\") " pod="kube-system/coredns-674b8bbfcf-cqpz4" Jan 14 23:50:22.067735 systemd[1]: Created slice kubepods-besteffort-pod21b369b7_d986_41d1_8a2e_a01d832685f7.slice - libcontainer container kubepods-besteffort-pod21b369b7_d986_41d1_8a2e_a01d832685f7.slice. Jan 14 23:50:22.107492 systemd[1]: Created slice kubepods-burstable-podfd982afe_6d4f_4aff_a998_9f2578e041f1.slice - libcontainer container kubepods-burstable-podfd982afe_6d4f_4aff_a998_9f2578e041f1.slice. 
Jan 14 23:50:22.151889 kubelet[3513]: I0114 23:50:22.151768 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/255a4468-7378-413d-92bd-8056478658d3-calico-apiserver-certs\") pod \"calico-apiserver-6d86655bcb-l8j4w\" (UID: \"255a4468-7378-413d-92bd-8056478658d3\") " pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" Jan 14 23:50:22.151889 kubelet[3513]: I0114 23:50:22.151848 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xqc\" (UniqueName: \"kubernetes.io/projected/255a4468-7378-413d-92bd-8056478658d3-kube-api-access-m5xqc\") pod \"calico-apiserver-6d86655bcb-l8j4w\" (UID: \"255a4468-7378-413d-92bd-8056478658d3\") " pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" Jan 14 23:50:22.169309 systemd[1]: Created slice kubepods-besteffort-pod255a4468_7378_413d_92bd_8056478658d3.slice - libcontainer container kubepods-besteffort-pod255a4468_7378_413d_92bd_8056478658d3.slice. Jan 14 23:50:22.255335 kubelet[3513]: I0114 23:50:22.255142 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2054a7f1-33a6-4a0b-8079-7e8881899bb3-config\") pod \"goldmane-666569f655-fd2bc\" (UID: \"2054a7f1-33a6-4a0b-8079-7e8881899bb3\") " pod="calico-system/goldmane-666569f655-fd2bc" Jan 14 23:50:22.255335 kubelet[3513]: I0114 23:50:22.255249 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2054a7f1-33a6-4a0b-8079-7e8881899bb3-goldmane-ca-bundle\") pod \"goldmane-666569f655-fd2bc\" (UID: \"2054a7f1-33a6-4a0b-8079-7e8881899bb3\") " pod="calico-system/goldmane-666569f655-fd2bc" Jan 14 23:50:22.256534 kubelet[3513]: I0114 23:50:22.255446 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/296d9b29-20aa-492b-aa70-e26652feb8da-calico-apiserver-certs\") pod \"calico-apiserver-6d86655bcb-8jvgv\" (UID: \"296d9b29-20aa-492b-aa70-e26652feb8da\") " pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" Jan 14 23:50:22.256534 kubelet[3513]: I0114 23:50:22.255517 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr2sb\" (UniqueName: \"kubernetes.io/projected/2054a7f1-33a6-4a0b-8079-7e8881899bb3-kube-api-access-mr2sb\") pod \"goldmane-666569f655-fd2bc\" (UID: \"2054a7f1-33a6-4a0b-8079-7e8881899bb3\") " pod="calico-system/goldmane-666569f655-fd2bc" Jan 14 23:50:22.256534 kubelet[3513]: I0114 23:50:22.255622 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgll\" (UniqueName: \"kubernetes.io/projected/296d9b29-20aa-492b-aa70-e26652feb8da-kube-api-access-gdgll\") pod \"calico-apiserver-6d86655bcb-8jvgv\" (UID: \"296d9b29-20aa-492b-aa70-e26652feb8da\") " pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" Jan 14 23:50:22.256534 kubelet[3513]: I0114 23:50:22.255677 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2054a7f1-33a6-4a0b-8079-7e8881899bb3-goldmane-key-pair\") pod \"goldmane-666569f655-fd2bc\" (UID: \"2054a7f1-33a6-4a0b-8079-7e8881899bb3\") " 
pod="calico-system/goldmane-666569f655-fd2bc" Jan 14 23:50:22.269438 systemd[1]: Created slice kubepods-besteffort-pod296d9b29_20aa_492b_aa70_e26652feb8da.slice - libcontainer container kubepods-besteffort-pod296d9b29_20aa_492b_aa70_e26652feb8da.slice. Jan 14 23:50:22.319484 systemd[1]: Created slice kubepods-besteffort-pod2054a7f1_33a6_4a0b_8079_7e8881899bb3.slice - libcontainer container kubepods-besteffort-pod2054a7f1_33a6_4a0b_8079_7e8881899bb3.slice. Jan 14 23:50:22.343505 systemd[1]: Created slice kubepods-besteffort-pod49be0d3d_84d7_4994_b2e7_37cea1fa9624.slice - libcontainer container kubepods-besteffort-pod49be0d3d_84d7_4994_b2e7_37cea1fa9624.slice. Jan 14 23:50:22.356527 kubelet[3513]: I0114 23:50:22.356388 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49be0d3d-84d7-4994-b2e7-37cea1fa9624-calico-apiserver-certs\") pod \"calico-apiserver-7cc6d978d-lsv59\" (UID: \"49be0d3d-84d7-4994-b2e7-37cea1fa9624\") " pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" Jan 14 23:50:22.357953 kubelet[3513]: I0114 23:50:22.357834 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwsd\" (UniqueName: \"kubernetes.io/projected/49be0d3d-84d7-4994-b2e7-37cea1fa9624-kube-api-access-fcwsd\") pod \"calico-apiserver-7cc6d978d-lsv59\" (UID: \"49be0d3d-84d7-4994-b2e7-37cea1fa9624\") " pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" Jan 14 23:50:22.394845 containerd[1996]: time="2026-01-14T23:50:22.394467052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdc6fb9d4-p5lnk,Uid:21b369b7-d986-41d1-8a2e-a01d832685f7,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:22.397007 containerd[1996]: time="2026-01-14T23:50:22.396883936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cqpz4,Uid:cf34a3a4-eba9-46e0-9f84-01c9f3710543,Namespace:kube-system,Attempt:0,}" Jan 14 23:50:22.405717 systemd[1]: Created slice kubepods-besteffort-pod758c893c_34e9_4998_9ac9_398fc12846e5.slice - libcontainer container kubepods-besteffort-pod758c893c_34e9_4998_9ac9_398fc12846e5.slice. 
Jan 14 23:50:22.439698 containerd[1996]: time="2026-01-14T23:50:22.439517800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28nlv,Uid:fd982afe-6d4f-4aff-a998-9f2578e041f1,Namespace:kube-system,Attempt:0,}" Jan 14 23:50:22.465629 kubelet[3513]: I0114 23:50:22.464100 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-backend-key-pair\") pod \"whisker-5d797fc78b-ddqxp\" (UID: \"758c893c-34e9-4998-9ac9-398fc12846e5\") " pod="calico-system/whisker-5d797fc78b-ddqxp" Jan 14 23:50:22.465629 kubelet[3513]: I0114 23:50:22.464208 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94w6k\" (UniqueName: \"kubernetes.io/projected/758c893c-34e9-4998-9ac9-398fc12846e5-kube-api-access-94w6k\") pod \"whisker-5d797fc78b-ddqxp\" (UID: \"758c893c-34e9-4998-9ac9-398fc12846e5\") " pod="calico-system/whisker-5d797fc78b-ddqxp" Jan 14 23:50:22.465629 kubelet[3513]: I0114 23:50:22.464278 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-ca-bundle\") pod \"whisker-5d797fc78b-ddqxp\" (UID: \"758c893c-34e9-4998-9ac9-398fc12846e5\") " pod="calico-system/whisker-5d797fc78b-ddqxp" Jan 14 23:50:22.484909 systemd[1]: Created slice kubepods-besteffort-pod23b28fd4_dc96_481b_a69a_1d96358778f5.slice - libcontainer container kubepods-besteffort-pod23b28fd4_dc96_481b_a69a_1d96358778f5.slice. Jan 14 23:50:22.487562 containerd[1996]: time="2026-01-14T23:50:22.487349669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-l8j4w,Uid:255a4468-7378-413d-92bd-8056478658d3,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:50:22.509289 containerd[1996]: time="2026-01-14T23:50:22.509236301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dsvj,Uid:23b28fd4-dc96-481b-a69a-1d96358778f5,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:22.591178 containerd[1996]: time="2026-01-14T23:50:22.591009737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-8jvgv,Uid:296d9b29-20aa-492b-aa70-e26652feb8da,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:50:22.654678 containerd[1996]: time="2026-01-14T23:50:22.654626825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fd2bc,Uid:2054a7f1-33a6-4a0b-8079-7e8881899bb3,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:22.655461 containerd[1996]: time="2026-01-14T23:50:22.655302473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc6d978d-lsv59,Uid:49be0d3d-84d7-4994-b2e7-37cea1fa9624,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:50:22.714409 containerd[1996]: time="2026-01-14T23:50:22.714226806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d797fc78b-ddqxp,Uid:758c893c-34e9-4998-9ac9-398fc12846e5,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:22.772632 containerd[1996]: time="2026-01-14T23:50:22.771926970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 23:50:23.090612 containerd[1996]: time="2026-01-14T23:50:23.088694548Z" level=error msg="Failed to destroy network for sandbox \"db8ab24b6117c7d8b71f5cb2f846792be6f74bfae66f8e928a330be2eb73281c\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.096606 containerd[1996]: time="2026-01-14T23:50:23.095107180Z" level=error msg="Failed to destroy network for sandbox \"973eeddbf0c8944a0703c49cadf48c9d946b212667ecbeb5d8ce37708d36ca6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.095392 systemd[1]: run-netns-cni\x2d4aa2e09a\x2df7b7\x2db013\x2d7bdc\x2d5dcbc03c3fca.mount: Deactivated successfully. Jan 14 23:50:23.098772 containerd[1996]: time="2026-01-14T23:50:23.098111056Z" level=error msg="Failed to destroy network for sandbox \"475a90d4ef9cfe00156c66c8c08e8cccc3665934f4f67ef3a50210393f017e26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.106478 systemd[1]: run-netns-cni\x2d321bb538\x2db58e\x2d9eb4\x2dda4f\x2dee22a8578255.mount: Deactivated successfully. Jan 14 23:50:23.106877 systemd[1]: run-netns-cni\x2dd130bd67\x2d28d6\x2d4c75\x2dedf3\x2d61e0af228ba3.mount: Deactivated successfully. Jan 14 23:50:23.120332 containerd[1996]: time="2026-01-14T23:50:23.120165628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-l8j4w,Uid:255a4468-7378-413d-92bd-8056478658d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"475a90d4ef9cfe00156c66c8c08e8cccc3665934f4f67ef3a50210393f017e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.121688 kubelet[3513]: E0114 23:50:23.121534 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"475a90d4ef9cfe00156c66c8c08e8cccc3665934f4f67ef3a50210393f017e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.124024 kubelet[3513]: E0114 23:50:23.121671 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"475a90d4ef9cfe00156c66c8c08e8cccc3665934f4f67ef3a50210393f017e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" Jan 14 23:50:23.124024 kubelet[3513]: E0114 23:50:23.121836 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"475a90d4ef9cfe00156c66c8c08e8cccc3665934f4f67ef3a50210393f017e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" Jan 14 23:50:23.124024 kubelet[3513]: E0114 23:50:23.121923 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6d86655bcb-l8j4w_calico-apiserver(255a4468-7378-413d-92bd-8056478658d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d86655bcb-l8j4w_calico-apiserver(255a4468-7378-413d-92bd-8056478658d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"475a90d4ef9cfe00156c66c8c08e8cccc3665934f4f67ef3a50210393f017e26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:50:23.135014 containerd[1996]: time="2026-01-14T23:50:23.134796736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cqpz4,Uid:cf34a3a4-eba9-46e0-9f84-01c9f3710543,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"973eeddbf0c8944a0703c49cadf48c9d946b212667ecbeb5d8ce37708d36ca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.135771 kubelet[3513]: E0114 23:50:23.135696 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973eeddbf0c8944a0703c49cadf48c9d946b212667ecbeb5d8ce37708d36ca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.137798 kubelet[3513]: E0114 23:50:23.135794 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973eeddbf0c8944a0703c49cadf48c9d946b212667ecbeb5d8ce37708d36ca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cqpz4" Jan 14 23:50:23.137798 kubelet[3513]: E0114 23:50:23.135844 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973eeddbf0c8944a0703c49cadf48c9d946b212667ecbeb5d8ce37708d36ca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cqpz4" Jan 14 23:50:23.137798 kubelet[3513]: E0114 23:50:23.135947 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cqpz4_kube-system(cf34a3a4-eba9-46e0-9f84-01c9f3710543)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cqpz4_kube-system(cf34a3a4-eba9-46e0-9f84-01c9f3710543)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"973eeddbf0c8944a0703c49cadf48c9d946b212667ecbeb5d8ce37708d36ca6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cqpz4" podUID="cf34a3a4-eba9-46e0-9f84-01c9f3710543" Jan 14 23:50:23.144626 containerd[1996]: 
time="2026-01-14T23:50:23.144272836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdc6fb9d4-p5lnk,Uid:21b369b7-d986-41d1-8a2e-a01d832685f7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8ab24b6117c7d8b71f5cb2f846792be6f74bfae66f8e928a330be2eb73281c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.144979 kubelet[3513]: E0114 23:50:23.144906 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8ab24b6117c7d8b71f5cb2f846792be6f74bfae66f8e928a330be2eb73281c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.145094 kubelet[3513]: E0114 23:50:23.144983 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8ab24b6117c7d8b71f5cb2f846792be6f74bfae66f8e928a330be2eb73281c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" Jan 14 23:50:23.145094 kubelet[3513]: E0114 23:50:23.145019 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8ab24b6117c7d8b71f5cb2f846792be6f74bfae66f8e928a330be2eb73281c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" Jan 14 23:50:23.145880 kubelet[3513]: E0114 23:50:23.145795 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fdc6fb9d4-p5lnk_calico-system(21b369b7-d986-41d1-8a2e-a01d832685f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fdc6fb9d4-p5lnk_calico-system(21b369b7-d986-41d1-8a2e-a01d832685f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db8ab24b6117c7d8b71f5cb2f846792be6f74bfae66f8e928a330be2eb73281c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:50:23.154976 containerd[1996]: time="2026-01-14T23:50:23.154916560Z" level=error msg="Failed to destroy network for sandbox \"593588a1f4a6d7d2a7cd24c5d9c5e7d7dc05b2d7618cec1c2af3f0ca2b7ee214\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.161234 systemd[1]: run-netns-cni\x2dd1126389\x2de6c0\x2d3a9f\x2db4bb\x2dc7ba85faac85.mount: Deactivated successfully. 
Jan 14 23:50:23.170545 containerd[1996]: time="2026-01-14T23:50:23.170478316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28nlv,Uid:fd982afe-6d4f-4aff-a998-9f2578e041f1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"593588a1f4a6d7d2a7cd24c5d9c5e7d7dc05b2d7618cec1c2af3f0ca2b7ee214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.171391 kubelet[3513]: E0114 23:50:23.171247 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593588a1f4a6d7d2a7cd24c5d9c5e7d7dc05b2d7618cec1c2af3f0ca2b7ee214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.171391 kubelet[3513]: E0114 23:50:23.171324 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593588a1f4a6d7d2a7cd24c5d9c5e7d7dc05b2d7618cec1c2af3f0ca2b7ee214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-28nlv" Jan 14 23:50:23.172092 kubelet[3513]: E0114 23:50:23.171928 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593588a1f4a6d7d2a7cd24c5d9c5e7d7dc05b2d7618cec1c2af3f0ca2b7ee214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-28nlv" Jan 14 23:50:23.173686 kubelet[3513]: E0114 23:50:23.172222 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-28nlv_kube-system(fd982afe-6d4f-4aff-a998-9f2578e041f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-28nlv_kube-system(fd982afe-6d4f-4aff-a998-9f2578e041f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"593588a1f4a6d7d2a7cd24c5d9c5e7d7dc05b2d7618cec1c2af3f0ca2b7ee214\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-28nlv" podUID="fd982afe-6d4f-4aff-a998-9f2578e041f1" Jan 14 23:50:23.193764 containerd[1996]: time="2026-01-14T23:50:23.193698016Z" level=error msg="Failed to destroy network for sandbox \"302aaa6d6b70f4290d5b53d8d2b624a1b6a220b9ad2703bd240a37fcc59e5e34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.194531 containerd[1996]: time="2026-01-14T23:50:23.193914772Z" level=error msg="Failed to destroy network for sandbox \"77af105e9fb13ba15b0b60a798e746c0ab1a2b376719c3c904c6175809fa6e7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 14 23:50:23.198293 containerd[1996]: time="2026-01-14T23:50:23.198144580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-8jvgv,Uid:296d9b29-20aa-492b-aa70-e26652feb8da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"302aaa6d6b70f4290d5b53d8d2b624a1b6a220b9ad2703bd240a37fcc59e5e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.199805 kubelet[3513]: E0114 23:50:23.198821 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"302aaa6d6b70f4290d5b53d8d2b624a1b6a220b9ad2703bd240a37fcc59e5e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.199805 kubelet[3513]: E0114 23:50:23.198926 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"302aaa6d6b70f4290d5b53d8d2b624a1b6a220b9ad2703bd240a37fcc59e5e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" Jan 14 23:50:23.199805 kubelet[3513]: E0114 23:50:23.198961 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"302aaa6d6b70f4290d5b53d8d2b624a1b6a220b9ad2703bd240a37fcc59e5e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" Jan 14 23:50:23.200502 kubelet[3513]: E0114 23:50:23.199562 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d86655bcb-8jvgv_calico-apiserver(296d9b29-20aa-492b-aa70-e26652feb8da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d86655bcb-8jvgv_calico-apiserver(296d9b29-20aa-492b-aa70-e26652feb8da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"302aaa6d6b70f4290d5b53d8d2b624a1b6a220b9ad2703bd240a37fcc59e5e34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:50:23.204108 containerd[1996]: time="2026-01-14T23:50:23.203321452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dsvj,Uid:23b28fd4-dc96-481b-a69a-1d96358778f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af105e9fb13ba15b0b60a798e746c0ab1a2b376719c3c904c6175809fa6e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.206271 kubelet[3513]: E0114 23:50:23.204663 3513 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af105e9fb13ba15b0b60a798e746c0ab1a2b376719c3c904c6175809fa6e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.206271 kubelet[3513]: E0114 23:50:23.206044 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af105e9fb13ba15b0b60a798e746c0ab1a2b376719c3c904c6175809fa6e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:23.206271 kubelet[3513]: E0114 23:50:23.206121 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77af105e9fb13ba15b0b60a798e746c0ab1a2b376719c3c904c6175809fa6e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8dsvj" Jan 14 23:50:23.206705 kubelet[3513]: E0114 23:50:23.206244 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77af105e9fb13ba15b0b60a798e746c0ab1a2b376719c3c904c6175809fa6e7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:23.221890 containerd[1996]: time="2026-01-14T23:50:23.221826964Z" level=error msg="Failed to destroy network for sandbox \"0d0a6824601d60e52bf91376d748118858c5e425d0173193a1ee241f95a0f2ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.227953 containerd[1996]: time="2026-01-14T23:50:23.227864872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc6d978d-lsv59,Uid:49be0d3d-84d7-4994-b2e7-37cea1fa9624,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a6824601d60e52bf91376d748118858c5e425d0173193a1ee241f95a0f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.229480 kubelet[3513]: E0114 23:50:23.229400 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a6824601d60e52bf91376d748118858c5e425d0173193a1ee241f95a0f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 14 23:50:23.229853 kubelet[3513]: E0114 23:50:23.229488 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a6824601d60e52bf91376d748118858c5e425d0173193a1ee241f95a0f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" Jan 14 23:50:23.229853 kubelet[3513]: E0114 23:50:23.229524 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d0a6824601d60e52bf91376d748118858c5e425d0173193a1ee241f95a0f2ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" Jan 14 23:50:23.230642 kubelet[3513]: E0114 23:50:23.229676 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc6d978d-lsv59_calico-apiserver(49be0d3d-84d7-4994-b2e7-37cea1fa9624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc6d978d-lsv59_calico-apiserver(49be0d3d-84d7-4994-b2e7-37cea1fa9624)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d0a6824601d60e52bf91376d748118858c5e425d0173193a1ee241f95a0f2ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:50:23.239977 containerd[1996]: time="2026-01-14T23:50:23.239853472Z" level=error msg="Failed to destroy network for sandbox \"b5d3d000223c1cc990ae62eab235b758e00437a56e16fbb1b205d72e02666650\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.243371 containerd[1996]: time="2026-01-14T23:50:23.243246772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fd2bc,Uid:2054a7f1-33a6-4a0b-8079-7e8881899bb3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d3d000223c1cc990ae62eab235b758e00437a56e16fbb1b205d72e02666650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.244095 kubelet[3513]: E0114 23:50:23.244017 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d3d000223c1cc990ae62eab235b758e00437a56e16fbb1b205d72e02666650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.244564 kubelet[3513]: E0114 23:50:23.244323 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d3d000223c1cc990ae62eab235b758e00437a56e16fbb1b205d72e02666650\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fd2bc" Jan 14 23:50:23.244564 kubelet[3513]: E0114 23:50:23.244504 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d3d000223c1cc990ae62eab235b758e00437a56e16fbb1b205d72e02666650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fd2bc" Jan 14 23:50:23.245025 kubelet[3513]: E0114 23:50:23.244768 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fd2bc_calico-system(2054a7f1-33a6-4a0b-8079-7e8881899bb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fd2bc_calico-system(2054a7f1-33a6-4a0b-8079-7e8881899bb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5d3d000223c1cc990ae62eab235b758e00437a56e16fbb1b205d72e02666650\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:50:23.258019 containerd[1996]: time="2026-01-14T23:50:23.257853916Z" level=error msg="Failed to destroy network for sandbox \"735f7f9c7cc7b68957672005b7e006607e675c1caad015b40caf33a7bc32c481\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.261446 containerd[1996]: time="2026-01-14T23:50:23.261334120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d797fc78b-ddqxp,Uid:758c893c-34e9-4998-9ac9-398fc12846e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"735f7f9c7cc7b68957672005b7e006607e675c1caad015b40caf33a7bc32c481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.261976 kubelet[3513]: E0114 23:50:23.261925 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735f7f9c7cc7b68957672005b7e006607e675c1caad015b40caf33a7bc32c481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:50:23.262066 kubelet[3513]: E0114 23:50:23.261998 3513 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735f7f9c7cc7b68957672005b7e006607e675c1caad015b40caf33a7bc32c481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d797fc78b-ddqxp" Jan 14 23:50:23.262066 kubelet[3513]: E0114 23:50:23.262034 3513 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735f7f9c7cc7b68957672005b7e006607e675c1caad015b40caf33a7bc32c481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d797fc78b-ddqxp" Jan 14 23:50:23.262197 kubelet[3513]: E0114 23:50:23.262112 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d797fc78b-ddqxp_calico-system(758c893c-34e9-4998-9ac9-398fc12846e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d797fc78b-ddqxp_calico-system(758c893c-34e9-4998-9ac9-398fc12846e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"735f7f9c7cc7b68957672005b7e006607e675c1caad015b40caf33a7bc32c481\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d797fc78b-ddqxp" podUID="758c893c-34e9-4998-9ac9-398fc12846e5" Jan 14 23:50:24.009275 systemd[1]: run-netns-cni\x2db4a0cd2f\x2d5614\x2d175a\x2dd0b6\x2d4a867fc70392.mount: Deactivated successfully. Jan 14 23:50:24.009934 systemd[1]: run-netns-cni\x2d729be35f\x2df0bb\x2d21d3\x2d8847\x2d88367a3a5527.mount: Deactivated successfully. Jan 14 23:50:24.010447 systemd[1]: run-netns-cni\x2d9c8da9e6\x2dfc59\x2d7fbc\x2d60a3\x2d37730633fd7b.mount: Deactivated successfully. Jan 14 23:50:24.010576 systemd[1]: run-netns-cni\x2db91ce06f\x2d54d5\x2d4458\x2d261b\x2d057ac9d7d45f.mount: Deactivated successfully. Jan 14 23:50:24.010740 systemd[1]: run-netns-cni\x2d131f70a5\x2d830b\x2dc222\x2d7e2c\x2d682080eb5677.mount: Deactivated successfully. Jan 14 23:50:28.995820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519767114.mount: Deactivated successfully. 
Jan 14 23:50:29.078734 containerd[1996]: time="2026-01-14T23:50:29.078648369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:29.080562 containerd[1996]: time="2026-01-14T23:50:29.080406921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 23:50:29.088579 containerd[1996]: time="2026-01-14T23:50:29.088342773Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:29.095849 containerd[1996]: time="2026-01-14T23:50:29.095685657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:50:29.097122 containerd[1996]: time="2026-01-14T23:50:29.096768381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.324110059s" Jan 14 23:50:29.097122 containerd[1996]: time="2026-01-14T23:50:29.096845685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 23:50:29.141915 containerd[1996]: time="2026-01-14T23:50:29.141865834Z" level=info msg="CreateContainer within sandbox \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 23:50:29.158635 containerd[1996]: time="2026-01-14T23:50:29.157882930Z" level=info msg="Container df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:50:29.184133 containerd[1996]: time="2026-01-14T23:50:29.184052494Z" level=info msg="CreateContainer within sandbox \"f2a2d21c167ae874f2963e361baaa1a6f68c833769edbb536182cf9dbbec6944\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6\"" Jan 14 23:50:29.185104 containerd[1996]: time="2026-01-14T23:50:29.185029474Z" level=info msg="StartContainer for \"df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6\"" Jan 14 23:50:29.189353 containerd[1996]: time="2026-01-14T23:50:29.189203926Z" level=info msg="connecting to shim df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6" address="unix:///run/containerd/s/ac5b095456ce1b5d00daa7691e82cbc2b077d83324b628876608eefa07e7ad8c" protocol=ttrpc version=3 Jan 14 23:50:29.230298 systemd[1]: Started cri-containerd-df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6.scope - libcontainer container df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6. 
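For scale, the calico/node pull above reports 150930912 bytes read in 6.324110059s, i.e. on the order of 23 MiB/s. A quick back-of-the-envelope check using only the figures printed in the journal (the program itself is just an illustration):

package main

import "fmt"

func main() {
	// Figures from the "stop pulling" and "Pulled image ... in 6.324110059s" lines above.
	const bytesRead = 150930912.0
	const seconds = 6.324110059
	mib := bytesRead / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
}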
Jan 14 23:50:29.326000 audit: BPF prog-id=178 op=LOAD Jan 14 23:50:29.329651 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 23:50:29.329741 kernel: audit: type=1334 audit(1768434629.326:586): prog-id=178 op=LOAD Jan 14 23:50:29.326000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.337718 kernel: audit: type=1300 audit(1768434629.326:586): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.338347 kernel: audit: type=1327 audit(1768434629.326:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.345648 kernel: audit: type=1334 audit(1768434629.330:587): prog-id=179 op=LOAD Jan 14 23:50:29.330000 audit: BPF prog-id=179 op=LOAD Jan 14 23:50:29.330000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.352479 kernel: audit: type=1300 audit(1768434629.330:587): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.355719 kernel: audit: type=1327 audit(1768434629.330:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.330000 audit: BPF prog-id=179 op=UNLOAD Jan 14 23:50:29.361806 kernel: audit: type=1334 audit(1768434629.330:588): prog-id=179 op=UNLOAD Jan 14 23:50:29.330000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.370257 kernel: audit: type=1300 
audit(1768434629.330:588): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.376640 kernel: audit: type=1327 audit(1768434629.330:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.330000 audit: BPF prog-id=178 op=UNLOAD Jan 14 23:50:29.378454 kernel: audit: type=1334 audit(1768434629.330:589): prog-id=178 op=UNLOAD Jan 14 23:50:29.330000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.330000 audit: BPF prog-id=180 op=LOAD Jan 14 23:50:29.330000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4008 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:29.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383462653466613762346532633462366661356462336133623136 Jan 14 23:50:29.410230 containerd[1996]: time="2026-01-14T23:50:29.410157695Z" level=info msg="StartContainer for \"df84be4fa7b4e2c4b6fa5db3a3b1656ff49087c707e61be94ebcf51485a717b6\" returns successfully" Jan 14 23:50:29.736693 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 23:50:29.736847 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
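The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes separating the arguments. Decoding the leading portion that appears in the log recovers the runc invocation behind these BPF prog load/unload events; a small illustrative decoder (not part of any tool shown in the log):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Leading portion of the PROCTITLE value from the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Arguments are NUL-separated; join them with spaces for display.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Prints: runc --root /run/containerd/runc/k8s.io
}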
Jan 14 23:50:30.081176 kubelet[3513]: I0114 23:50:30.080954 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gxr77" podStartSLOduration=2.201241844 podStartE2EDuration="18.080918302s" podCreationTimestamp="2026-01-14 23:50:12 +0000 UTC" firstStartedPulling="2026-01-14 23:50:13.219004855 +0000 UTC m=+38.053465318" lastFinishedPulling="2026-01-14 23:50:29.098681301 +0000 UTC m=+53.933141776" observedRunningTime="2026-01-14 23:50:29.866665525 +0000 UTC m=+54.701126012" watchObservedRunningTime="2026-01-14 23:50:30.080918302 +0000 UTC m=+54.915378777" Jan 14 23:50:30.131786 kubelet[3513]: I0114 23:50:30.131710 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-backend-key-pair\") pod \"758c893c-34e9-4998-9ac9-398fc12846e5\" (UID: \"758c893c-34e9-4998-9ac9-398fc12846e5\") " Jan 14 23:50:30.131953 kubelet[3513]: I0114 23:50:30.131801 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-ca-bundle\") pod \"758c893c-34e9-4998-9ac9-398fc12846e5\" (UID: \"758c893c-34e9-4998-9ac9-398fc12846e5\") " Jan 14 23:50:30.131953 kubelet[3513]: I0114 23:50:30.131865 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94w6k\" (UniqueName: \"kubernetes.io/projected/758c893c-34e9-4998-9ac9-398fc12846e5-kube-api-access-94w6k\") pod \"758c893c-34e9-4998-9ac9-398fc12846e5\" (UID: \"758c893c-34e9-4998-9ac9-398fc12846e5\") " Jan 14 23:50:30.138413 kubelet[3513]: I0114 23:50:30.138281 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "758c893c-34e9-4998-9ac9-398fc12846e5" (UID: "758c893c-34e9-4998-9ac9-398fc12846e5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 23:50:30.148685 kubelet[3513]: I0114 23:50:30.148551 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758c893c-34e9-4998-9ac9-398fc12846e5-kube-api-access-94w6k" (OuterVolumeSpecName: "kube-api-access-94w6k") pod "758c893c-34e9-4998-9ac9-398fc12846e5" (UID: "758c893c-34e9-4998-9ac9-398fc12846e5"). InnerVolumeSpecName "kube-api-access-94w6k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 23:50:30.149123 systemd[1]: var-lib-kubelet-pods-758c893c\x2d34e9\x2d4998\x2d9ac9\x2d398fc12846e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94w6k.mount: Deactivated successfully. Jan 14 23:50:30.157185 kubelet[3513]: I0114 23:50:30.156902 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "758c893c-34e9-4998-9ac9-398fc12846e5" (UID: "758c893c-34e9-4998-9ac9-398fc12846e5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 23:50:30.160796 systemd[1]: var-lib-kubelet-pods-758c893c\x2d34e9\x2d4998\x2d9ac9\x2d398fc12846e5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
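The pod_startup_latency_tracker line above is internally consistent: 18.080918302s is watchObservedRunningTime minus podCreationTimestamp, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets) reproduces the reported podStartSLOduration. A quick cross-check with the logged values:

package main

import "fmt"

func main() {
	// Durations and monotonic offsets copied from the kubelet line above.
	const (
		e2e                 = 18.080918302 // podStartE2EDuration
		firstStartedPulling = 38.053465318 // m=+ offset
		lastFinishedPulling = 53.933141776 // m=+ offset
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", pullWindow)
	fmt.Printf("e2e minus pull window: %.9fs\n", e2e-pullWindow) // compare: podStartSLOduration=2.201241844
}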
Jan 14 23:50:30.232437 kubelet[3513]: I0114 23:50:30.232329 3513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94w6k\" (UniqueName: \"kubernetes.io/projected/758c893c-34e9-4998-9ac9-398fc12846e5-kube-api-access-94w6k\") on node \"ip-172-31-18-197\" DevicePath \"\"" Jan 14 23:50:30.232437 kubelet[3513]: I0114 23:50:30.232390 3513 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-backend-key-pair\") on node \"ip-172-31-18-197\" DevicePath \"\"" Jan 14 23:50:30.232437 kubelet[3513]: I0114 23:50:30.232418 3513 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/758c893c-34e9-4998-9ac9-398fc12846e5-whisker-ca-bundle\") on node \"ip-172-31-18-197\" DevicePath \"\"" Jan 14 23:50:30.830976 systemd[1]: Removed slice kubepods-besteffort-pod758c893c_34e9_4998_9ac9_398fc12846e5.slice - libcontainer container kubepods-besteffort-pod758c893c_34e9_4998_9ac9_398fc12846e5.slice. Jan 14 23:50:30.964182 systemd[1]: Created slice kubepods-besteffort-pod54b5943d_5205_46b5_af4e_d4680f06e390.slice - libcontainer container kubepods-besteffort-pod54b5943d_5205_46b5_af4e_d4680f06e390.slice. Jan 14 23:50:31.039148 kubelet[3513]: I0114 23:50:31.038936 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b5943d-5205-46b5-af4e-d4680f06e390-whisker-backend-key-pair\") pod \"whisker-789ccfdc9b-bggd8\" (UID: \"54b5943d-5205-46b5-af4e-d4680f06e390\") " pod="calico-system/whisker-789ccfdc9b-bggd8" Jan 14 23:50:31.039148 kubelet[3513]: I0114 23:50:31.039012 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b5943d-5205-46b5-af4e-d4680f06e390-whisker-ca-bundle\") pod \"whisker-789ccfdc9b-bggd8\" (UID: \"54b5943d-5205-46b5-af4e-d4680f06e390\") " pod="calico-system/whisker-789ccfdc9b-bggd8" Jan 14 23:50:31.039148 kubelet[3513]: I0114 23:50:31.039058 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpxsc\" (UniqueName: \"kubernetes.io/projected/54b5943d-5205-46b5-af4e-d4680f06e390-kube-api-access-hpxsc\") pod \"whisker-789ccfdc9b-bggd8\" (UID: \"54b5943d-5205-46b5-af4e-d4680f06e390\") " pod="calico-system/whisker-789ccfdc9b-bggd8" Jan 14 23:50:31.275622 containerd[1996]: time="2026-01-14T23:50:31.275464848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-789ccfdc9b-bggd8,Uid:54b5943d-5205-46b5-af4e-d4680f06e390,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:31.465899 kubelet[3513]: I0114 23:50:31.465846 3513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758c893c-34e9-4998-9ac9-398fc12846e5" path="/var/lib/kubelet/pods/758c893c-34e9-4998-9ac9-398fc12846e5/volumes" Jan 14 23:50:32.806000 audit: BPF prog-id=181 op=LOAD Jan 14 23:50:32.806000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9062988 a2=98 a3=ffffe9062978 items=0 ppid=4631 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.806000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:50:32.806000 audit: BPF prog-id=181 op=UNLOAD Jan 14 23:50:32.806000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe9062958 a3=0 items=0 ppid=4631 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.806000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:50:32.806000 audit: BPF prog-id=182 op=LOAD Jan 14 23:50:32.806000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9062838 a2=74 a3=95 items=0 ppid=4631 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.806000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:50:32.806000 audit: BPF prog-id=182 op=UNLOAD Jan 14 23:50:32.806000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4631 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.806000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:50:32.806000 audit: BPF prog-id=183 op=LOAD Jan 14 23:50:32.806000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9062868 a2=40 a3=ffffe9062898 items=0 ppid=4631 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.806000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:50:32.807000 audit: BPF prog-id=183 op=UNLOAD Jan 14 23:50:32.807000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe9062898 items=0 ppid=4631 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.807000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:50:32.811000 audit: BPF prog-id=184 op=LOAD Jan 14 23:50:32.811000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe5d06f78 a2=98 a3=ffffe5d06f68 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:32.811000 audit: BPF prog-id=184 op=UNLOAD Jan 14 23:50:32.811000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe5d06f48 a3=0 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:32.811000 audit: BPF prog-id=185 op=LOAD Jan 14 23:50:32.811000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe5d06c08 a2=74 a3=95 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:32.812000 audit: BPF prog-id=185 op=UNLOAD Jan 14 23:50:32.812000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:32.812000 audit: BPF prog-id=186 op=LOAD Jan 14 23:50:32.812000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe5d06c68 a2=94 a3=2 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:32.812000 audit: BPF prog-id=186 op=UNLOAD Jan 14 23:50:32.812000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:32.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:32.895042 systemd-networkd[1746]: cali6f660655851: Link UP Jan 14 23:50:32.896751 systemd-networkd[1746]: cali6f660655851: Gained carrier Jan 14 23:50:32.902987 (udev-worker)[4752]: Network interface NamePolicy= disabled on kernel command line. 
Jan 14 23:50:33.041461 containerd[1996]: 2026-01-14 23:50:31.376 [INFO][4607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 23:50:33.041461 containerd[1996]: 2026-01-14 23:50:32.526 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0 whisker-789ccfdc9b- calico-system 54b5943d-5205-46b5-af4e-d4680f06e390 982 0 2026-01-14 23:50:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:789ccfdc9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-197 whisker-789ccfdc9b-bggd8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6f660655851 [] [] }} ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-" Jan 14 23:50:33.041461 containerd[1996]: 2026-01-14 23:50:32.526 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.041461 containerd[1996]: 2026-01-14 23:50:32.721 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" HandleID="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Workload="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.722 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" HandleID="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Workload="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345250), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-197", "pod":"whisker-789ccfdc9b-bggd8", "timestamp":"2026-01-14 23:50:32.721489371 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.722 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.725 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.726 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.755 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" host="ip-172-31-18-197" Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.769 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.779 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.784 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:33.042322 containerd[1996]: 2026-01-14 23:50:32.791 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.791 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" host="ip-172-31-18-197" Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.795 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553 Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.801 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" host="ip-172-31-18-197" Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.818 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.65/26] block=192.168.99.64/26 handle="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" host="ip-172-31-18-197" Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.818 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.65/26] handle="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" host="ip-172-31-18-197" Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.819 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
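The IPAM exchange above assigns 192.168.99.65 out of the host's affine block 192.168.99.64/26, a block of 64 addresses. A small sanity check with Go's net package, using only the values from the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address taken from the ipam/ipam.go lines above.
	_, block, err := net.ParseCIDR("192.168.99.64/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.99.65")
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses; contains %s: %v\n",
		block, 1<<(bits-ones), ip, block.Contains(ip))
	// block 192.168.99.64/26 holds 64 addresses; contains 192.168.99.65: true
}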
Jan 14 23:50:33.042825 containerd[1996]: 2026-01-14 23:50:32.819 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.65/26] IPv6=[] ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" HandleID="k8s-pod-network.e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Workload="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.043126 containerd[1996]: 2026-01-14 23:50:32.829 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0", GenerateName:"whisker-789ccfdc9b-", Namespace:"calico-system", SelfLink:"", UID:"54b5943d-5205-46b5-af4e-d4680f06e390", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"789ccfdc9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"whisker-789ccfdc9b-bggd8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6f660655851", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:33.043126 containerd[1996]: 2026-01-14 23:50:32.830 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.65/32] ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.043300 containerd[1996]: 2026-01-14 23:50:32.830 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f660655851 ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.043300 containerd[1996]: 2026-01-14 23:50:32.927 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.043401 containerd[1996]: 2026-01-14 23:50:32.928 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" 
WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0", GenerateName:"whisker-789ccfdc9b-", Namespace:"calico-system", SelfLink:"", UID:"54b5943d-5205-46b5-af4e-d4680f06e390", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"789ccfdc9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553", Pod:"whisker-789ccfdc9b-bggd8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6f660655851", MAC:"8a:e1:03:b5:91:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:33.043522 containerd[1996]: 2026-01-14 23:50:33.036 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" Namespace="calico-system" Pod="whisker-789ccfdc9b-bggd8" WorkloadEndpoint="ip--172--31--18--197-k8s-whisker--789ccfdc9b--bggd8-eth0" Jan 14 23:50:33.076000 audit: BPF prog-id=187 op=LOAD Jan 14 23:50:33.076000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe5d06c28 a2=40 a3=ffffe5d06c58 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.076000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.077000 audit: BPF prog-id=187 op=UNLOAD Jan 14 23:50:33.077000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe5d06c58 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.077000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.090121 containerd[1996]: time="2026-01-14T23:50:33.090050017Z" level=info msg="connecting to shim e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553" address="unix:///run/containerd/s/9ff3445bf462f51219d6502463339312ca2e7670287400d8c18bc17affceb5b3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:33.104000 audit: BPF prog-id=188 op=LOAD Jan 14 23:50:33.104000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe5d06c38 a2=94 a3=4 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.104000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.105000 audit: BPF prog-id=188 op=UNLOAD Jan 14 23:50:33.105000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.105000 audit: BPF prog-id=189 op=LOAD Jan 14 23:50:33.105000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe5d06a78 a2=94 a3=5 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.105000 audit: BPF prog-id=189 op=UNLOAD Jan 14 23:50:33.105000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.105000 audit: BPF prog-id=190 op=LOAD Jan 14 23:50:33.105000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe5d06ca8 a2=94 a3=6 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.105000 audit: BPF prog-id=190 op=UNLOAD Jan 14 23:50:33.105000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.109000 audit: BPF prog-id=191 op=LOAD Jan 14 23:50:33.109000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe5d06478 a2=94 a3=83 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.109000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.109000 audit: BPF prog-id=192 op=LOAD Jan 14 23:50:33.109000 audit[4740]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe5d06238 a2=94 a3=2 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.109000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.109000 audit: BPF prog-id=192 op=UNLOAD 
Jan 14 23:50:33.109000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.109000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.112000 audit: BPF prog-id=191 op=UNLOAD Jan 14 23:50:33.112000 audit[4740]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=8093620 a3=8086b00 items=0 ppid=4631 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:50:33.143000 audit: BPF prog-id=193 op=LOAD Jan 14 23:50:33.143000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe21a3198 a2=98 a3=ffffe21a3188 items=0 ppid=4631 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.143000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:50:33.144000 audit: BPF prog-id=193 op=UNLOAD Jan 14 23:50:33.144000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe21a3168 a3=0 items=0 ppid=4631 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.144000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:50:33.145000 audit: BPF prog-id=194 op=LOAD Jan 14 23:50:33.145000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe21a3048 a2=74 a3=95 items=0 ppid=4631 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.145000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:50:33.145000 audit: BPF prog-id=194 op=UNLOAD Jan 14 23:50:33.145000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4631 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.145000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:50:33.145000 audit: BPF prog-id=195 op=LOAD Jan 14 23:50:33.145000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe21a3078 a2=40 a3=ffffe21a30a8 items=0 ppid=4631 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.145000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:50:33.145000 audit: BPF prog-id=195 op=UNLOAD Jan 14 23:50:33.145000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe21a30a8 items=0 ppid=4631 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.145000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:50:33.163384 systemd[1]: Started cri-containerd-e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553.scope - libcontainer container e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553. 
Jan 14 23:50:33.218000 audit: BPF prog-id=196 op=LOAD Jan 14 23:50:33.220000 audit: BPF prog-id=197 op=LOAD Jan 14 23:50:33.220000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.221000 audit: BPF prog-id=197 op=UNLOAD Jan 14 23:50:33.221000 audit[4781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.221000 audit: BPF prog-id=198 op=LOAD Jan 14 23:50:33.221000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.222000 audit: BPF prog-id=199 op=LOAD Jan 14 23:50:33.222000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.222000 audit: BPF prog-id=199 op=UNLOAD Jan 14 23:50:33.222000 audit[4781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.222000 audit: BPF prog-id=198 op=UNLOAD Jan 14 23:50:33.222000 audit[4781]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.222000 audit: BPF prog-id=200 op=LOAD Jan 14 23:50:33.222000 audit[4781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4769 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623737643535376632363739323534366539643035633862366235 Jan 14 23:50:33.305997 containerd[1996]: time="2026-01-14T23:50:33.305921006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-789ccfdc9b-bggd8,Uid:54b5943d-5205-46b5-af4e-d4680f06e390,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2b77d557f26792546e9d05c8b6b56667cdd6068eeb06b1b6331d3b83c514553\"" Jan 14 23:50:33.311728 containerd[1996]: time="2026-01-14T23:50:33.311161550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:50:33.335769 (udev-worker)[4751]: Network interface NamePolicy= disabled on kernel command line. 
Jan 14 23:50:33.346961 systemd-networkd[1746]: vxlan.calico: Link UP Jan 14 23:50:33.346975 systemd-networkd[1746]: vxlan.calico: Gained carrier Jan 14 23:50:33.390000 audit: BPF prog-id=201 op=LOAD Jan 14 23:50:33.390000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff2964578 a2=98 a3=fffff2964568 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.390000 audit: BPF prog-id=201 op=UNLOAD Jan 14 23:50:33.390000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff2964548 a3=0 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.390000 audit: BPF prog-id=202 op=LOAD Jan 14 23:50:33.390000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff2964258 a2=74 a3=95 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.390000 audit: BPF prog-id=202 op=UNLOAD Jan 14 23:50:33.390000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.390000 audit: BPF prog-id=203 op=LOAD Jan 14 23:50:33.390000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff29642b8 a2=94 a3=2 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.390000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.391000 audit: BPF prog-id=203 op=UNLOAD Jan 14 23:50:33.391000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 
ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.391000 audit: BPF prog-id=204 op=LOAD Jan 14 23:50:33.391000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff2964138 a2=40 a3=fffff2964168 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.391000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.392000 audit: BPF prog-id=204 op=UNLOAD Jan 14 23:50:33.392000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff2964168 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.392000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.392000 audit: BPF prog-id=205 op=LOAD Jan 14 23:50:33.392000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff2964288 a2=94 a3=b7 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.392000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.392000 audit: BPF prog-id=205 op=UNLOAD Jan 14 23:50:33.392000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.392000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.394000 audit: BPF prog-id=206 op=LOAD Jan 14 23:50:33.394000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff2963938 a2=94 a3=2 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.394000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.394000 audit: BPF prog-id=206 op=UNLOAD Jan 14 23:50:33.394000 audit[4834]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.394000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.394000 audit: BPF prog-id=207 op=LOAD Jan 14 23:50:33.394000 audit[4834]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff2963ac8 a2=94 a3=30 items=0 ppid=4631 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.394000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:50:33.403000 audit: BPF prog-id=208 op=LOAD Jan 14 23:50:33.403000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdbe08778 a2=98 a3=ffffdbe08768 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.403000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.403000 audit: BPF prog-id=208 op=UNLOAD Jan 14 23:50:33.403000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdbe08748 a3=0 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.403000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.404000 audit: BPF prog-id=209 op=LOAD Jan 14 23:50:33.404000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdbe08408 a2=74 a3=95 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.404000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.404000 audit: BPF prog-id=209 op=UNLOAD Jan 14 23:50:33.404000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4631 pid=4838 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.404000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.404000 audit: BPF prog-id=210 op=LOAD Jan 14 23:50:33.404000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdbe08468 a2=94 a3=2 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.404000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.404000 audit: BPF prog-id=210 op=UNLOAD Jan 14 23:50:33.404000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.404000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.464335 containerd[1996]: time="2026-01-14T23:50:33.464116983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdc6fb9d4-p5lnk,Uid:21b369b7-d986-41d1-8a2e-a01d832685f7,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:33.606936 containerd[1996]: time="2026-01-14T23:50:33.606772336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:33.611866 containerd[1996]: time="2026-01-14T23:50:33.611714368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:50:33.611866 containerd[1996]: time="2026-01-14T23:50:33.611789188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:33.612234 kubelet[3513]: E0114 23:50:33.612177 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:50:33.612834 kubelet[3513]: E0114 23:50:33.612248 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:50:33.617832 kubelet[3513]: E0114 23:50:33.617705 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c071b4136e94af696b69447927c640d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:33.621849 containerd[1996]: time="2026-01-14T23:50:33.621787504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:50:33.695000 audit: BPF prog-id=211 op=LOAD Jan 14 23:50:33.695000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdbe08428 a2=40 a3=ffffdbe08458 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.695000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.696000 audit: BPF prog-id=211 op=UNLOAD Jan 14 23:50:33.696000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffdbe08458 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.696000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.698214 (udev-worker)[4839]: Network interface NamePolicy= disabled on kernel command line. 
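The PROCTITLE values in the audit records above are the hex-encoded argv of the audited process, with NUL bytes separating the arguments; decoding them shows pids 4834/4838 are Calico invoking bpftool to load and then inspect its XDP prefilter program. A minimal decoding sketch in Python (the helper name decode_proctitle is illustrative, not part of auditd or bpftool):

    def decode_proctitle(hex_value: str) -> str:
        # auditd encodes argv as hex with NUL separators between arguments.
        return " ".join(
            arg.decode() for arg in bytes.fromhex(hex_value).split(b"\x00") if arg
        )

    print(decode_proctitle(
        "627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F"
        "6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F786470"
        "2F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470"
    ))
    # -> bpftool prog load /usr/lib/calico/bpf/filter.o
    #    /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp

Applied to the other proctitle value repeated below, the same helper yields bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A.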
Jan 14 23:50:33.701575 systemd-networkd[1746]: cali0fc9eb70906: Link UP Jan 14 23:50:33.704367 systemd-networkd[1746]: cali0fc9eb70906: Gained carrier Jan 14 23:50:33.733000 audit: BPF prog-id=212 op=LOAD Jan 14 23:50:33.733000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdbe08438 a2=94 a3=4 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.733000 audit: BPF prog-id=212 op=UNLOAD Jan 14 23:50:33.733000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.733000 audit: BPF prog-id=213 op=LOAD Jan 14 23:50:33.733000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdbe08278 a2=94 a3=5 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.733000 audit: BPF prog-id=213 op=UNLOAD Jan 14 23:50:33.733000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.733000 audit: BPF prog-id=214 op=LOAD Jan 14 23:50:33.733000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdbe084a8 a2=94 a3=6 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.734000 audit: BPF prog-id=214 op=UNLOAD Jan 14 23:50:33.734000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.734000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.734000 audit: BPF prog-id=215 op=LOAD Jan 14 23:50:33.734000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdbe07c78 a2=94 a3=83 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.734000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.735000 audit: BPF prog-id=216 op=LOAD Jan 14 23:50:33.735000 audit[4838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffdbe07a38 a2=94 a3=2 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.735000 audit: BPF prog-id=216 op=UNLOAD Jan 14 23:50:33.735000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.735000 audit: BPF prog-id=215 op=UNLOAD Jan 14 23:50:33.735000 audit[4838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3afb2620 a3=3afa5b00 items=0 ppid=4631 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.735000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:50:33.747856 containerd[1996]: 2026-01-14 23:50:33.547 [INFO][4841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0 calico-kube-controllers-fdc6fb9d4- calico-system 21b369b7-d986-41d1-8a2e-a01d832685f7 908 0 2026-01-14 23:50:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fdc6fb9d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-197 calico-kube-controllers-fdc6fb9d4-p5lnk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0fc9eb70906 [] [] }} 
ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-" Jan 14 23:50:33.747856 containerd[1996]: 2026-01-14 23:50:33.547 [INFO][4841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.747856 containerd[1996]: 2026-01-14 23:50:33.602 [INFO][4853] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" HandleID="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Workload="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.603 [INFO][4853] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" HandleID="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Workload="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-197", "pod":"calico-kube-controllers-fdc6fb9d4-p5lnk", "timestamp":"2026-01-14 23:50:33.60295564 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.603 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.603 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.603 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.629 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" host="ip-172-31-18-197" Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.640 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.647 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.651 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:33.748948 containerd[1996]: 2026-01-14 23:50:33.660 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.661 [INFO][4853] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" host="ip-172-31-18-197" Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.666 [INFO][4853] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04 Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.676 [INFO][4853] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" host="ip-172-31-18-197" Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.686 [INFO][4853] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.66/26] block=192.168.99.64/26 handle="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" host="ip-172-31-18-197" Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.686 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.66/26] handle="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" host="ip-172-31-18-197" Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.686 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
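The IPAM records above show the node holding an affinity for block 192.168.99.64/26 and the CNI plugin claiming 192.168.99.66 for the new pod. The arithmetic behind those lines can be checked with Python's standard ipaddress module; a small sketch:

    from ipaddress import ip_address, ip_network

    block = ip_network("192.168.99.64/26")
    pod_ip = ip_address("192.168.99.66")

    print(block.num_addresses)      # 64 addresses per /26 block
    print(pod_ip in block)          # True: the claimed IP is inside the block
    print(block.broadcast_address)  # 192.168.99.127, the last address of the block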
Jan 14 23:50:33.751341 containerd[1996]: 2026-01-14 23:50:33.686 [INFO][4853] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.66/26] IPv6=[] ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" HandleID="k8s-pod-network.421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Workload="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.752082 containerd[1996]: 2026-01-14 23:50:33.692 [INFO][4841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0", GenerateName:"calico-kube-controllers-fdc6fb9d4-", Namespace:"calico-system", SelfLink:"", UID:"21b369b7-d986-41d1-8a2e-a01d832685f7", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fdc6fb9d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"calico-kube-controllers-fdc6fb9d4-p5lnk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0fc9eb70906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:33.752228 containerd[1996]: 2026-01-14 23:50:33.693 [INFO][4841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.66/32] ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.752228 containerd[1996]: 2026-01-14 23:50:33.693 [INFO][4841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fc9eb70906 ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.752228 containerd[1996]: 2026-01-14 23:50:33.707 [INFO][4841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.752377 containerd[1996]: 2026-01-14 
23:50:33.710 [INFO][4841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0", GenerateName:"calico-kube-controllers-fdc6fb9d4-", Namespace:"calico-system", SelfLink:"", UID:"21b369b7-d986-41d1-8a2e-a01d832685f7", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fdc6fb9d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04", Pod:"calico-kube-controllers-fdc6fb9d4-p5lnk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0fc9eb70906", MAC:"46:81:ad:74:c5:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:33.752492 containerd[1996]: 2026-01-14 23:50:33.741 [INFO][4841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" Namespace="calico-system" Pod="calico-kube-controllers-fdc6fb9d4-p5lnk" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--kube--controllers--fdc6fb9d4--p5lnk-eth0" Jan 14 23:50:33.775000 audit: BPF prog-id=207 op=UNLOAD Jan 14 23:50:33.775000 audit[4631]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000395300 a2=0 a3=0 items=0 ppid=4619 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:33.775000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 23:50:33.821798 containerd[1996]: time="2026-01-14T23:50:33.821335049Z" level=info msg="connecting to shim 421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04" address="unix:///run/containerd/s/95d9bc52ef61460b80e1e8832519dfa430f92ca8bd8c2d7fd6768be812865425" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:33.905083 systemd[1]: Started cri-containerd-421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04.scope - libcontainer container 421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04. 
Jan 14 23:50:33.910722 containerd[1996]: time="2026-01-14T23:50:33.909269381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:33.912398 containerd[1996]: time="2026-01-14T23:50:33.912190961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:50:33.914142 containerd[1996]: time="2026-01-14T23:50:33.912346133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:33.914247 kubelet[3513]: E0114 23:50:33.913982 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:50:33.915183 kubelet[3513]: E0114 23:50:33.914531 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:50:33.917831 kubelet[3513]: E0114 23:50:33.917698 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:33.920155 kubelet[3513]: E0114 23:50:33.920029 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:50:34.030000 audit: BPF prog-id=217 op=LOAD Jan 14 23:50:34.033000 audit: BPF prog-id=218 op=LOAD Jan 14 23:50:34.033000 audit[4889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.033000 audit: BPF prog-id=218 op=UNLOAD Jan 14 23:50:34.033000 audit[4889]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.033000 audit: BPF prog-id=219 op=LOAD Jan 14 23:50:34.033000 audit[4889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.034000 audit: BPF prog-id=220 op=LOAD Jan 14 23:50:34.034000 audit[4889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.034000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.034000 audit: BPF prog-id=220 op=UNLOAD Jan 14 23:50:34.034000 audit[4889]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.034000 audit: BPF prog-id=219 op=UNLOAD Jan 14 23:50:34.034000 audit[4889]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.034000 audit: BPF prog-id=221 op=LOAD Jan 14 23:50:34.034000 audit[4889]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4876 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432316165333033626438383839386263636337376539393738343233 Jan 14 23:50:34.167243 containerd[1996]: time="2026-01-14T23:50:34.166264527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdc6fb9d4-p5lnk,Uid:21b369b7-d986-41d1-8a2e-a01d832685f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"421ae303bd88898bccc77e9978423c8271c5a30451bda28bd5cd0dd334b2cd04\"" Jan 14 23:50:34.175803 containerd[1996]: time="2026-01-14T23:50:34.175730679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:50:34.174000 audit[4928]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.174000 audit[4928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffeadf7040 a2=0 a3=ffffa951afa8 items=0 ppid=4631 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.174000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 
23:50:34.181000 audit[4929]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4929 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.181000 audit[4929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc6bc6200 a2=0 a3=ffffb684cfa8 items=0 ppid=4631 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.181000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.224000 audit[4941]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4941 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.224000 audit[4941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe2e776a0 a2=0 a3=ffff9021dfa8 items=0 ppid=4631 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.224000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.228000 audit[4938]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.228000 audit[4938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffebb1b9d0 a2=0 a3=ffff8a9e9fa8 items=0 ppid=4631 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.228000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.333000 audit[4951]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.337294 kernel: kauditd_printk_skb: 247 callbacks suppressed Jan 14 23:50:34.337355 kernel: audit: type=1325 audit(1768434634.333:673): table=filter:127 family=2 entries=36 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.333000 audit[4951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff1bdbff0 a2=0 a3=ffff914aefa8 items=0 ppid=4631 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.343821 systemd-networkd[1746]: cali6f660655851: Gained IPv6LL Jan 14 23:50:34.349034 kernel: audit: type=1300 audit(1768434634.333:673): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff1bdbff0 a2=0 a3=ffff914aefa8 items=0 ppid=4631 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.371057 kernel: audit: type=1327 audit(1768434634.333:673): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.333000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.462933 containerd[1996]: time="2026-01-14T23:50:34.462703156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dsvj,Uid:23b28fd4-dc96-481b-a69a-1d96358778f5,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:34.467688 containerd[1996]: time="2026-01-14T23:50:34.467617408Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:34.469382 containerd[1996]: time="2026-01-14T23:50:34.469203064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:50:34.469382 containerd[1996]: time="2026-01-14T23:50:34.469346116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:34.472659 kubelet[3513]: E0114 23:50:34.471727 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:50:34.472659 kubelet[3513]: E0114 23:50:34.471839 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:50:34.472659 kubelet[3513]: E0114 23:50:34.472078 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fmf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fdc6fb9d4-p5lnk_calico-system(21b369b7-d986-41d1-8a2e-a01d832685f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:34.471986 systemd-networkd[1746]: vxlan.calico: Gained IPv6LL Jan 14 23:50:34.480017 kubelet[3513]: E0114 23:50:34.479939 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:50:34.683367 systemd-networkd[1746]: cali3d81a774607: Link UP Jan 14 23:50:34.683859 systemd-networkd[1746]: cali3d81a774607: Gained carrier 
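The repeated ErrImagePull/ImagePullBackOff records come down to ghcr.io answering 404 Not Found for the v3.30.4 tags. One way to confirm a tag's existence independently of containerd is to query the registry's manifest endpoint; the sketch below assumes ghcr.io follows the standard Docker Registry v2 anonymous token flow (token realm https://ghcr.io/token, service ghcr.io), which is an assumption about the registry rather than anything taken from this log:

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repository: str, tag: str) -> bool:
        # Assumed anonymous token endpoint for public images on ghcr.io.
        token_url = (
            "https://ghcr.io/token?service=ghcr.io"
            f"&scope=repository:{repository}:pull"
        )
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # Standard OCI distribution manifest endpoint; 404 means the tag is missing.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}", method="HEAD"
        )
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header(
            "Accept",
            "application/vnd.oci.image.index.v1+json, "
            "application/vnd.docker.distribution.manifest.list.v2+json",
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))

A False result here would match the "failed to resolve image ... not found" errors logged by containerd and kubelet above.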
Jan 14 23:50:34.722852 containerd[1996]: 2026-01-14 23:50:34.547 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0 csi-node-driver- calico-system 23b28fd4-dc96-481b-a69a-1d96358778f5 808 0 2026-01-14 23:50:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-197 csi-node-driver-8dsvj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d81a774607 [] [] }} ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-" Jan 14 23:50:34.722852 containerd[1996]: 2026-01-14 23:50:34.547 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.722852 containerd[1996]: 2026-01-14 23:50:34.600 [INFO][4965] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" HandleID="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Workload="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.600 [INFO][4965] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" HandleID="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Workload="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-197", "pod":"csi-node-driver-8dsvj", "timestamp":"2026-01-14 23:50:34.600210161 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.600 [INFO][4965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.600 [INFO][4965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.600 [INFO][4965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.620 [INFO][4965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" host="ip-172-31-18-197" Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.626 [INFO][4965] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.635 [INFO][4965] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.639 [INFO][4965] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:34.723306 containerd[1996]: 2026-01-14 23:50:34.643 [INFO][4965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.643 [INFO][4965] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" host="ip-172-31-18-197" Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.646 [INFO][4965] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6 Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.654 [INFO][4965] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" host="ip-172-31-18-197" Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.667 [INFO][4965] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.67/26] block=192.168.99.64/26 handle="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" host="ip-172-31-18-197" Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.667 [INFO][4965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.67/26] handle="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" host="ip-172-31-18-197" Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.667 [INFO][4965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:50:34.727320 containerd[1996]: 2026-01-14 23:50:34.667 [INFO][4965] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.67/26] IPv6=[] ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" HandleID="k8s-pod-network.443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Workload="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.728379 containerd[1996]: 2026-01-14 23:50:34.672 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23b28fd4-dc96-481b-a69a-1d96358778f5", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"csi-node-driver-8dsvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d81a774607", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:34.728552 containerd[1996]: 2026-01-14 23:50:34.672 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.67/32] ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.728552 containerd[1996]: 2026-01-14 23:50:34.672 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d81a774607 ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.728552 containerd[1996]: 2026-01-14 23:50:34.685 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.737483 containerd[1996]: 2026-01-14 23:50:34.686 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" 
Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23b28fd4-dc96-481b-a69a-1d96358778f5", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6", Pod:"csi-node-driver-8dsvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d81a774607", MAC:"9a:20:a8:c6:88:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:34.728890 systemd-networkd[1746]: cali0fc9eb70906: Gained IPv6LL Jan 14 23:50:34.737842 containerd[1996]: 2026-01-14 23:50:34.711 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" Namespace="calico-system" Pod="csi-node-driver-8dsvj" WorkloadEndpoint="ip--172--31--18--197-k8s-csi--node--driver--8dsvj-eth0" Jan 14 23:50:34.771000 audit[4979]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.771000 audit[4979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffc848fb10 a2=0 a3=ffffaae6cfa8 items=0 ppid=4631 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.787858 kernel: audit: type=1325 audit(1768434634.771:674): table=filter:128 family=2 entries=40 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:34.787994 kernel: audit: type=1300 audit(1768434634.771:674): arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffc848fb10 a2=0 a3=ffffaae6cfa8 items=0 ppid=4631 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.771000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.792438 kernel: audit: type=1327 audit(1768434634.771:674): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:34.823130 containerd[1996]: time="2026-01-14T23:50:34.823060902Z" level=info msg="connecting to shim 443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6" address="unix:///run/containerd/s/d7ed9b8604d0388727754541ff201fa26533fa441eb6db2dacbe4927f54fecf3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:34.850534 kubelet[3513]: E0114 23:50:34.850448 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:50:34.855620 kubelet[3513]: E0114 23:50:34.852752 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:50:34.905185 systemd[1]: Started cri-containerd-443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6.scope - libcontainer container 443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6. 
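The audit records in this section carry the invoking command line as a PROCTITLE field, hex-encoded with NUL bytes separating the arguments (ausearch -i is the usual way to render it). A small stdlib-only Python sketch that decodes the value from the iptables record above; the hex constant is copied verbatim from that record:

    # Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
    hex_proctitle = (
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000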
Jan 14 23:50:34.951000 audit[5015]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:34.951000 audit[5015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff4c7d9c0 a2=0 a3=1 items=0 ppid=3619 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.966631 kernel: audit: type=1325 audit(1768434634.951:675): table=filter:129 family=2 entries=20 op=nft_register_rule pid=5015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:34.966751 kernel: audit: type=1300 audit(1768434634.951:675): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff4c7d9c0 a2=0 a3=1 items=0 ppid=3619 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.971841 kernel: audit: type=1327 audit(1768434634.951:675): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:34.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:34.964000 audit[5015]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:34.979159 kernel: audit: type=1325 audit(1768434634.964:676): table=nat:130 family=2 entries=14 op=nft_register_rule pid=5015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:34.964000 audit[5015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff4c7d9c0 a2=0 a3=1 items=0 ppid=3619 pid=5015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:34.977000 audit: BPF prog-id=222 op=LOAD Jan 14 23:50:34.978000 audit: BPF prog-id=223 op=LOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:34.978000 audit: BPF prog-id=223 op=UNLOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:34.978000 audit: BPF prog-id=224 op=LOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:34.978000 audit: BPF prog-id=225 op=LOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:34.978000 audit: BPF prog-id=225 op=UNLOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:34.978000 audit: BPF prog-id=224 op=UNLOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:34.978000 audit: BPF prog-id=226 op=LOAD Jan 14 23:50:34.978000 audit[4999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:34.978000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434336361343832326132303865353837663164326635303463346130 Jan 14 23:50:35.018346 containerd[1996]: time="2026-01-14T23:50:35.018279747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dsvj,Uid:23b28fd4-dc96-481b-a69a-1d96358778f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"443ca4822a208e587f1d2f504c4a025681b74d7fa92c4df05c5b7205b718e7c6\"" Jan 14 23:50:35.023363 containerd[1996]: time="2026-01-14T23:50:35.023261139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:50:35.282725 containerd[1996]: time="2026-01-14T23:50:35.282033484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:35.283950 containerd[1996]: time="2026-01-14T23:50:35.283873252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:50:35.284262 containerd[1996]: time="2026-01-14T23:50:35.284001064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:35.284843 kubelet[3513]: E0114 23:50:35.284695 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:50:35.285325 kubelet[3513]: E0114 23:50:35.285117 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:50:35.286036 kubelet[3513]: E0114 23:50:35.285864 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:35.291062 containerd[1996]: time="2026-01-14T23:50:35.290906440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:50:35.471137 containerd[1996]: time="2026-01-14T23:50:35.471069401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-l8j4w,Uid:255a4468-7378-413d-92bd-8056478658d3,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:50:35.552576 containerd[1996]: time="2026-01-14T23:50:35.552059633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:35.554439 containerd[1996]: time="2026-01-14T23:50:35.553844837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:50:35.554865 containerd[1996]: time="2026-01-14T23:50:35.553887605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:35.555424 kubelet[3513]: E0114 23:50:35.555371 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:50:35.556095 kubelet[3513]: E0114 23:50:35.555914 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:50:35.556440 kubelet[3513]: E0114 23:50:35.556358 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:35.559649 kubelet[3513]: E0114 23:50:35.558667 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 
23:50:35.687053 systemd-networkd[1746]: cali8ce4be00037: Link UP Jan 14 23:50:35.689242 systemd-networkd[1746]: cali8ce4be00037: Gained carrier Jan 14 23:50:35.725425 containerd[1996]: 2026-01-14 23:50:35.541 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0 calico-apiserver-6d86655bcb- calico-apiserver 255a4468-7378-413d-92bd-8056478658d3 913 0 2026-01-14 23:49:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d86655bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-197 calico-apiserver-6d86655bcb-l8j4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ce4be00037 [] [] }} ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-" Jan 14 23:50:35.725425 containerd[1996]: 2026-01-14 23:50:35.542 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.725425 containerd[1996]: 2026-01-14 23:50:35.618 [INFO][5042] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" HandleID="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Workload="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.618 [INFO][5042] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" HandleID="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Workload="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-197", "pod":"calico-apiserver-6d86655bcb-l8j4w", "timestamp":"2026-01-14 23:50:35.618285138 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.618 [INFO][5042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.618 [INFO][5042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.618 [INFO][5042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.636 [INFO][5042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" host="ip-172-31-18-197" Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.643 [INFO][5042] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.651 [INFO][5042] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.654 [INFO][5042] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:35.725813 containerd[1996]: 2026-01-14 23:50:35.658 [INFO][5042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.658 [INFO][5042] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" host="ip-172-31-18-197" Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.660 [INFO][5042] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41 Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.667 [INFO][5042] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" host="ip-172-31-18-197" Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.678 [INFO][5042] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.68/26] block=192.168.99.64/26 handle="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" host="ip-172-31-18-197" Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.678 [INFO][5042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.68/26] handle="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" host="ip-172-31-18-197" Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.678 [INFO][5042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:50:35.726941 containerd[1996]: 2026-01-14 23:50:35.679 [INFO][5042] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.68/26] IPv6=[] ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" HandleID="k8s-pod-network.cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Workload="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.727287 containerd[1996]: 2026-01-14 23:50:35.682 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0", GenerateName:"calico-apiserver-6d86655bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"255a4468-7378-413d-92bd-8056478658d3", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d86655bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"calico-apiserver-6d86655bcb-l8j4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ce4be00037", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:35.727434 containerd[1996]: 2026-01-14 23:50:35.683 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.68/32] ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.727434 containerd[1996]: 2026-01-14 23:50:35.683 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ce4be00037 ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.727434 containerd[1996]: 2026-01-14 23:50:35.691 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.728647 containerd[1996]: 2026-01-14 23:50:35.692 [INFO][5029] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0", GenerateName:"calico-apiserver-6d86655bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"255a4468-7378-413d-92bd-8056478658d3", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d86655bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41", Pod:"calico-apiserver-6d86655bcb-l8j4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ce4be00037", MAC:"4a:55:88:8c:66:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:35.728888 containerd[1996]: 2026-01-14 23:50:35.719 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-l8j4w" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--l8j4w-eth0" Jan 14 23:50:35.792736 containerd[1996]: time="2026-01-14T23:50:35.791623711Z" level=info msg="connecting to shim cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41" address="unix:///run/containerd/s/86aeaf18e73e3f6a2c2095d171b378683752bb25b098c7fddbae3a3c38f7a6fd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:35.792000 audit[5059]: NETFILTER_CFG table=filter:131 family=2 entries=58 op=nft_register_chain pid=5059 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:35.792000 audit[5059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30584 a0=3 a1=ffffc3a04460 a2=0 a3=ffffa1bd3fa8 items=0 ppid=4631 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:35.851021 systemd[1]: Started cri-containerd-cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41.scope - libcontainer container cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41. 
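The Calico image pulls in this stretch of the log all point at ghcr.io/flatcar/calico/<name>:v3.30.4 and fail with NotFound (kube-controllers, whisker, whisker-backend, csi, node-driver-registrar above; apiserver further below), so the affected pods cycle through ErrImagePull and ImagePullBackOff. A rough triage sketch for tallying the failing references out of a journal dump like this one; the regular expression matches the literal "failed to resolve image: <ref>: not found" text in these entries, and journal.log is a placeholder path for wherever the dump is saved:

    import re
    from collections import Counter

    # Count image references that containerd/kubelet report as unresolvable.
    pattern = re.compile(r"failed to resolve image: (\S+): not found")

    failures = Counter()
    with open("journal.log") as fh:   # placeholder: a saved copy of this journal
        for line in fh:
            failures.update(pattern.findall(line))

    for image, count in failures.most_common():
        print(f"{count:4d}  {image}")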
Jan 14 23:50:35.855775 kubelet[3513]: E0114 23:50:35.855704 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:50:35.857640 kubelet[3513]: E0114 23:50:35.857135 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:35.920000 audit: BPF prog-id=227 op=LOAD Jan 14 23:50:35.921000 audit: BPF prog-id=228 op=LOAD Jan 14 23:50:35.921000 audit[5076]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.922000 audit: BPF prog-id=228 op=UNLOAD Jan 14 23:50:35.922000 audit[5076]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.923000 audit: BPF prog-id=229 op=LOAD Jan 14 23:50:35.923000 audit[5076]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.923000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.923000 audit: BPF prog-id=230 op=LOAD Jan 14 23:50:35.923000 audit[5076]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.923000 audit: BPF prog-id=230 op=UNLOAD Jan 14 23:50:35.923000 audit[5076]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.923000 audit: BPF prog-id=229 op=UNLOAD Jan 14 23:50:35.923000 audit[5076]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.924000 audit: BPF prog-id=231 op=LOAD Jan 14 23:50:35.924000 audit[5076]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5065 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:35.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366566333938623563376536353863636635303366613030626437 Jan 14 23:50:35.993643 containerd[1996]: time="2026-01-14T23:50:35.993533660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-l8j4w,Uid:255a4468-7378-413d-92bd-8056478658d3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cd6ef398b5c7e658ccf503fa00bd701b343b9b6caeae79e9bd6634d37011fc41\"" Jan 14 23:50:36.001708 containerd[1996]: time="2026-01-14T23:50:36.001282096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:50:36.264535 systemd-networkd[1746]: cali3d81a774607: Gained IPv6LL 
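cali3d81a774607, the host-side veth for csi-node-driver-8dsvj (MAC 9a:20:a8:c6:88:a6 in the endpoint dump earlier), has just gained an IPv6 link-local address; the journal does not print the address itself. Assuming the interface uses the kernel's default EUI-64 generation (addr_gen_mode 0), the address follows from the MAC as sketched below; if the node is configured for stable-privacy (RFC 7217) addresses instead, the actual value will differ:

    import ipaddress

    def eui64_link_local(mac: str) -> str:
        """Derive the EUI-64 IPv6 link-local address for a MAC address."""
        octets = [int(part, 16) for part in mac.split(":")]
        octets[0] ^= 0x02                             # flip the universal/local bit
        eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
        groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
        return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

    print(eui64_link_local("9a:20:a8:c6:88:a6"))   # fe80::9820:a8ff:fec6:88a6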
Jan 14 23:50:36.267094 containerd[1996]: time="2026-01-14T23:50:36.266846897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:36.268141 containerd[1996]: time="2026-01-14T23:50:36.268088933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:50:36.268496 containerd[1996]: time="2026-01-14T23:50:36.268202825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:36.269019 kubelet[3513]: E0114 23:50:36.268433 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:36.269019 kubelet[3513]: E0114 23:50:36.268490 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:36.269019 kubelet[3513]: E0114 23:50:36.268834 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5xqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6d86655bcb-l8j4w_calico-apiserver(255a4468-7378-413d-92bd-8056478658d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:36.270492 kubelet[3513]: E0114 23:50:36.270412 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:50:36.462483 containerd[1996]: time="2026-01-14T23:50:36.461928306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-8jvgv,Uid:296d9b29-20aa-492b-aa70-e26652feb8da,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:50:36.464334 containerd[1996]: time="2026-01-14T23:50:36.464160522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc6d978d-lsv59,Uid:49be0d3d-84d7-4994-b2e7-37cea1fa9624,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:50:36.465958 containerd[1996]: time="2026-01-14T23:50:36.465881682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cqpz4,Uid:cf34a3a4-eba9-46e0-9f84-01c9f3710543,Namespace:kube-system,Attempt:0,}" Jan 14 23:50:36.865363 kubelet[3513]: E0114 23:50:36.865191 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:50:36.866575 kubelet[3513]: E0114 23:50:36.865265 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:36.939144 systemd-networkd[1746]: cali2912fead968: Link UP Jan 14 23:50:36.941805 systemd-networkd[1746]: cali2912fead968: Gained carrier Jan 14 23:50:36.987000 audit[5162]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:36.987000 audit[5162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd6496730 
a2=0 a3=1 items=0 ppid=3619 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:36.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:36.995624 containerd[1996]: 2026-01-14 23:50:36.627 [INFO][5108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0 calico-apiserver-7cc6d978d- calico-apiserver 49be0d3d-84d7-4994-b2e7-37cea1fa9624 916 0 2026-01-14 23:50:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc6d978d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-197 calico-apiserver-7cc6d978d-lsv59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2912fead968 [] [] }} ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-" Jan 14 23:50:36.995624 containerd[1996]: 2026-01-14 23:50:36.627 [INFO][5108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:36.995624 containerd[1996]: 2026-01-14 23:50:36.795 [INFO][5138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" HandleID="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Workload="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.795 [INFO][5138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" HandleID="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Workload="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036f090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-197", "pod":"calico-apiserver-7cc6d978d-lsv59", "timestamp":"2026-01-14 23:50:36.79503818 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.795 [INFO][5138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.795 [INFO][5138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.795 [INFO][5138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.823 [INFO][5138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" host="ip-172-31-18-197" Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.832 [INFO][5138] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.846 [INFO][5138] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.852 [INFO][5138] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:36.996232 containerd[1996]: 2026-01-14 23:50:36.866 [INFO][5138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.866 [INFO][5138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" host="ip-172-31-18-197" Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.870 [INFO][5138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.881 [INFO][5138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" host="ip-172-31-18-197" Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.900 [INFO][5138] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.69/26] block=192.168.99.64/26 handle="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" host="ip-172-31-18-197" Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.900 [INFO][5138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.69/26] handle="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" host="ip-172-31-18-197" Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.900 [INFO][5138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
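The audit SYSCALL records interleaved through this section all report arch=c00000b7, the audit identifier for aarch64 (AUDIT_ARCH_AARCH64). On the arm64 (asm-generic) syscall table the recurring numbers map to 211 = sendmsg (the netlink message carrying the nft_register_* operations logged as NETFILTER_CFG), 280 = bpf (paired with the "BPF prog-id ... op=LOAD" events around runc), and 57 = close (paired with op=UNLOAD). A tiny lookup sketch under that assumption; the numbers are architecture specific and would differ on x86_64:

    import re

    # Arm64 (asm-generic) numbers for the syscalls seen in these records.
    AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

    def annotate(audit_line: str) -> str:
        """Return 'arch syscall-name' for an audit SYSCALL record."""
        arch = re.search(r"arch=([0-9a-f]+)", audit_line).group(1)
        nr = int(re.search(r"syscall=(\d+)", audit_line).group(1))
        label = "aarch64" if arch == "c00000b7" else arch
        name = AARCH64_SYSCALLS.get(nr, f"syscall {nr}")
        return f"{label} {name}"

    print(annotate("audit[5162]: SYSCALL arch=c00000b7 syscall=211 success=yes"))
    # -> aarch64 sendmsg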
Jan 14 23:50:36.996818 containerd[1996]: 2026-01-14 23:50:36.900 [INFO][5138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.69/26] IPv6=[] ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" HandleID="k8s-pod-network.414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Workload="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:36.997177 containerd[1996]: 2026-01-14 23:50:36.911 [INFO][5108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0", GenerateName:"calico-apiserver-7cc6d978d-", Namespace:"calico-apiserver", SelfLink:"", UID:"49be0d3d-84d7-4994-b2e7-37cea1fa9624", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc6d978d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"calico-apiserver-7cc6d978d-lsv59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2912fead968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:36.997322 containerd[1996]: 2026-01-14 23:50:36.911 [INFO][5108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.69/32] ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:36.997322 containerd[1996]: 2026-01-14 23:50:36.913 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2912fead968 ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:36.997322 containerd[1996]: 2026-01-14 23:50:36.944 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:36.997490 containerd[1996]: 2026-01-14 23:50:36.947 [INFO][5108] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0", GenerateName:"calico-apiserver-7cc6d978d-", Namespace:"calico-apiserver", SelfLink:"", UID:"49be0d3d-84d7-4994-b2e7-37cea1fa9624", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc6d978d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e", Pod:"calico-apiserver-7cc6d978d-lsv59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2912fead968", MAC:"a2:da:9f:3b:11:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:37.001692 containerd[1996]: 2026-01-14 23:50:36.985 [INFO][5108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" Namespace="calico-apiserver" Pod="calico-apiserver-7cc6d978d-lsv59" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--7cc6d978d--lsv59-eth0" Jan 14 23:50:37.016000 audit[5162]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:37.016000 audit[5162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd6496730 a2=0 a3=1 items=0 ppid=3619 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:37.089238 containerd[1996]: time="2026-01-14T23:50:37.089169509Z" level=info msg="connecting to shim 414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e" address="unix:///run/containerd/s/f201901e959850e5602599c13ac609c5d156d130710121d84bcfe82e2a74bb66" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:37.113084 systemd-networkd[1746]: cali16ed2dae3b7: Link UP Jan 14 23:50:37.113433 systemd-networkd[1746]: cali16ed2dae3b7: Gained carrier Jan 14 23:50:37.180947 containerd[1996]: 2026-01-14 23:50:36.668 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0 coredns-674b8bbfcf- kube-system cf34a3a4-eba9-46e0-9f84-01c9f3710543 905 0 2026-01-14 23:49:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-197 coredns-674b8bbfcf-cqpz4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali16ed2dae3b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-" Jan 14 23:50:37.180947 containerd[1996]: 2026-01-14 23:50:36.669 [INFO][5104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.180947 containerd[1996]: 2026-01-14 23:50:36.809 [INFO][5145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" HandleID="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Workload="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:36.809 [INFO][5145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" HandleID="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Workload="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b130), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-197", "pod":"coredns-674b8bbfcf-cqpz4", "timestamp":"2026-01-14 23:50:36.809110916 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:36.809 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:36.900 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:36.901 [INFO][5145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:36.958 [INFO][5145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" host="ip-172-31-18-197" Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:36.994 [INFO][5145] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:37.007 [INFO][5145] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:37.012 [INFO][5145] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:37.181240 containerd[1996]: 2026-01-14 23:50:37.025 [INFO][5145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.026 [INFO][5145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" host="ip-172-31-18-197" Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.032 [INFO][5145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27 Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.069 [INFO][5145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" host="ip-172-31-18-197" Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.091 [INFO][5145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.70/26] block=192.168.99.64/26 handle="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" host="ip-172-31-18-197" Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.092 [INFO][5145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.70/26] handle="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" host="ip-172-31-18-197" Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.093 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
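The IPAM entries above show Calico confirming the node's affinity for block 192.168.99.64/26 and then claiming 192.168.99.70/26 from it for coredns-674b8bbfcf-cqpz4. A minimal Python sketch, not part of the captured log, with the prefix and address copied from the entries above, checks that the claimed address really falls inside the affine block:

    # Sketch: confirm the claimed address sits inside the node's affine IPAM block.
    # The prefix and address are copied from the Calico IPAM entries above.
    import ipaddress

    block = ipaddress.ip_network("192.168.99.64/26")   # block with confirmed affinity
    claimed = ipaddress.ip_address("192.168.99.70")    # address claimed for the coredns pod

    assert claimed in block
    print(f"{claimed} is one of {block.num_addresses} addresses in {block}")

The same /26 also covers the 192.168.99.69 and 192.168.99.71 assignments recorded elsewhere in this capture.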
Jan 14 23:50:37.183917 containerd[1996]: 2026-01-14 23:50:37.094 [INFO][5145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.70/26] IPv6=[] ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" HandleID="k8s-pod-network.b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Workload="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.184249 containerd[1996]: 2026-01-14 23:50:37.105 [INFO][5104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cf34a3a4-eba9-46e0-9f84-01c9f3710543", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"coredns-674b8bbfcf-cqpz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16ed2dae3b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:37.184249 containerd[1996]: 2026-01-14 23:50:37.105 [INFO][5104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.70/32] ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.184249 containerd[1996]: 2026-01-14 23:50:37.105 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16ed2dae3b7 ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.184249 containerd[1996]: 2026-01-14 23:50:37.123 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" 
WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.184249 containerd[1996]: 2026-01-14 23:50:37.129 [INFO][5104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cf34a3a4-eba9-46e0-9f84-01c9f3710543", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27", Pod:"coredns-674b8bbfcf-cqpz4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16ed2dae3b7", MAC:"26:c4:c4:22:6f:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:37.184249 containerd[1996]: 2026-01-14 23:50:37.168 [INFO][5104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" Namespace="kube-system" Pod="coredns-674b8bbfcf-cqpz4" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--cqpz4-eth0" Jan 14 23:50:37.205000 audit[5188]: NETFILTER_CFG table=filter:134 family=2 entries=49 op=nft_register_chain pid=5188 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:37.205000 audit[5188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffd39fdd60 a2=0 a3=ffffb5652fa8 items=0 ppid=4631 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.205000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:37.223832 systemd-networkd[1746]: cali8ce4be00037: Gained IPv6LL Jan 14 23:50:37.283319 systemd[1]: Started 
cri-containerd-414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e.scope - libcontainer container 414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e. Jan 14 23:50:37.292901 containerd[1996]: time="2026-01-14T23:50:37.292789806Z" level=info msg="connecting to shim b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27" address="unix:///run/containerd/s/e78d1982f7fbdb4a3def7043d908f78a92dc38c2923e1386166427cb3d0c67a4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:37.335794 systemd-networkd[1746]: calia4709ed0c1b: Link UP Jan 14 23:50:37.339100 systemd-networkd[1746]: calia4709ed0c1b: Gained carrier Jan 14 23:50:37.405000 audit[5241]: NETFILTER_CFG table=filter:135 family=2 entries=58 op=nft_register_chain pid=5241 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:37.412000 audit: BPF prog-id=232 op=LOAD Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:36.662 [INFO][5101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0 calico-apiserver-6d86655bcb- calico-apiserver 296d9b29-20aa-492b-aa70-e26652feb8da 914 0 2026-01-14 23:49:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d86655bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-197 calico-apiserver-6d86655bcb-8jvgv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia4709ed0c1b [] [] }} ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:36.662 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:36.819 [INFO][5143] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" HandleID="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Workload="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:36.819 [INFO][5143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" HandleID="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Workload="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-197", "pod":"calico-apiserver-6d86655bcb-8jvgv", "timestamp":"2026-01-14 23:50:36.819232988 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 
23:50:37.415891 containerd[1996]: 2026-01-14 23:50:36.819 [INFO][5143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.093 [INFO][5143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.093 [INFO][5143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.141 [INFO][5143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.189 [INFO][5143] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.201 [INFO][5143] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.208 [INFO][5143] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.218 [INFO][5143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.218 [INFO][5143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.227 [INFO][5143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8 Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.255 [INFO][5143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.293 [INFO][5143] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.71/26] block=192.168.99.64/26 handle="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.294 [INFO][5143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.71/26] handle="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" host="ip-172-31-18-197" Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.295 [INFO][5143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
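In the WorkloadEndpoint dumps above, the port numbers appear as hexadecimal literals (Port:0x35, Port:0x23c1) while the endpoint summary lists them in decimal. A short sketch, not part of the log, converts those literals and shows they are the usual coredns ports:

    # Sketch: the WorkloadEndpointPort values in the dumps above are hex literals;
    # converting them recovers the decimal ports listed in the endpoint summary.
    for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23C1)]:
        print(name, port)   # dns 53, dns-tcp 53, metrics 9153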
Jan 14 23:50:37.415891 containerd[1996]: 2026-01-14 23:50:37.297 [INFO][5143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.71/26] IPv6=[] ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" HandleID="k8s-pod-network.94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Workload="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.418281 containerd[1996]: 2026-01-14 23:50:37.305 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0", GenerateName:"calico-apiserver-6d86655bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"296d9b29-20aa-492b-aa70-e26652feb8da", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d86655bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"calico-apiserver-6d86655bcb-8jvgv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4709ed0c1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:37.418281 containerd[1996]: 2026-01-14 23:50:37.306 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.71/32] ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.418281 containerd[1996]: 2026-01-14 23:50:37.307 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4709ed0c1b ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.418281 containerd[1996]: 2026-01-14 23:50:37.346 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.418281 containerd[1996]: 2026-01-14 23:50:37.350 [INFO][5101] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0", GenerateName:"calico-apiserver-6d86655bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"296d9b29-20aa-492b-aa70-e26652feb8da", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d86655bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8", Pod:"calico-apiserver-6d86655bcb-8jvgv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4709ed0c1b", MAC:"e6:d6:f0:33:a2:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:37.418281 containerd[1996]: 2026-01-14 23:50:37.396 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" Namespace="calico-apiserver" Pod="calico-apiserver-6d86655bcb-8jvgv" WorkloadEndpoint="ip--172--31--18--197-k8s-calico--apiserver--6d86655bcb--8jvgv-eth0" Jan 14 23:50:37.417000 audit: BPF prog-id=233 op=LOAD Jan 14 23:50:37.417000 audit[5191]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.418000 audit: BPF prog-id=233 op=UNLOAD Jan 14 23:50:37.418000 audit[5191]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.420000 audit: BPF prog-id=234 op=LOAD Jan 14 23:50:37.420000 audit[5191]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.422000 audit: BPF prog-id=235 op=LOAD Jan 14 23:50:37.422000 audit[5191]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.423000 audit: BPF prog-id=235 op=UNLOAD Jan 14 23:50:37.423000 audit[5191]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.424000 audit: BPF prog-id=234 op=UNLOAD Jan 14 23:50:37.424000 audit[5191]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.426000 audit: BPF prog-id=236 op=LOAD Jan 14 23:50:37.426000 audit[5191]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5178 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.426000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431343839306261366262626566353536653430653530306330356331 Jan 14 23:50:37.405000 audit[5241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27304 a0=3 a1=ffffe3d56540 a2=0 a3=ffff9aa18fa8 items=0 ppid=4631 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.405000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:37.454286 systemd[1]: Started cri-containerd-b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27.scope - libcontainer container b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27. Jan 14 23:50:37.556000 audit: BPF prog-id=237 op=LOAD Jan 14 23:50:37.559000 audit: BPF prog-id=238 op=LOAD Jan 14 23:50:37.559000 audit[5239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.561000 audit: BPF prog-id=238 op=UNLOAD Jan 14 23:50:37.561000 audit[5239]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.561000 audit: BPF prog-id=239 op=LOAD Jan 14 23:50:37.561000 audit[5239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.561000 audit: BPF prog-id=240 op=LOAD Jan 14 23:50:37.561000 audit[5239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.561000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.561000 audit: BPF prog-id=240 op=UNLOAD Jan 14 23:50:37.561000 audit[5239]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.561000 audit: BPF prog-id=239 op=UNLOAD Jan 14 23:50:37.561000 audit[5239]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.562000 audit: BPF prog-id=241 op=LOAD Jan 14 23:50:37.562000 audit[5239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5218 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231623034643263346634643135666637643936643238373036323037 Jan 14 23:50:37.586517 containerd[1996]: time="2026-01-14T23:50:37.586445708Z" level=info msg="connecting to shim 94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8" address="unix:///run/containerd/s/9622c3cbcd2bf4c5df0ed98dce293fa9fb8632341ef3cc5861856d662e754acf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:37.661496 systemd[1]: Started cri-containerd-94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8.scope - libcontainer container 94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8. 
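The audit PROCTITLE records above hex-encode the command line because the recorded argv contains NUL separators. A minimal Python sketch, not part of the log, decodes the iptables-nft-restore proctitle captured above:

    # Sketch: decode an audit PROCTITLE value (hex-encoded, NUL-separated argv).
    # The hex string is the iptables-nft-restore proctitle recorded above.
    proctitle = (
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

The runc proctitles in the surrounding records decode the same way.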
Jan 14 23:50:37.680358 containerd[1996]: time="2026-01-14T23:50:37.680307920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc6d978d-lsv59,Uid:49be0d3d-84d7-4994-b2e7-37cea1fa9624,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"414890ba6bbbef556e40e500c05c1a55c8b426fa6b7567f3507ea3348b0f3b9e\"" Jan 14 23:50:37.683134 containerd[1996]: time="2026-01-14T23:50:37.683088080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:50:37.720724 containerd[1996]: time="2026-01-14T23:50:37.720130280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cqpz4,Uid:cf34a3a4-eba9-46e0-9f84-01c9f3710543,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27\"" Jan 14 23:50:37.734540 containerd[1996]: time="2026-01-14T23:50:37.734490824Z" level=info msg="CreateContainer within sandbox \"b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:50:37.775068 containerd[1996]: time="2026-01-14T23:50:37.774998457Z" level=info msg="Container b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:50:37.774000 audit[5318]: NETFILTER_CFG table=filter:136 family=2 entries=57 op=nft_register_chain pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:37.774000 audit[5318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27828 a0=3 a1=ffffd09f3e30 a2=0 a3=ffffab475fa8 items=0 ppid=4631 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:37.782000 audit: BPF prog-id=242 op=LOAD Jan 14 23:50:37.784000 audit: BPF prog-id=243 op=LOAD Jan 14 23:50:37.784000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.785000 audit: BPF prog-id=243 op=UNLOAD Jan 14 23:50:37.785000 audit[5287]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.786000 audit: BPF prog-id=244 op=LOAD Jan 14 23:50:37.786000 audit[5287]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.786000 audit: BPF prog-id=245 op=LOAD Jan 14 23:50:37.786000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.786000 audit: BPF prog-id=245 op=UNLOAD Jan 14 23:50:37.786000 audit[5287]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.786000 audit: BPF prog-id=244 op=UNLOAD Jan 14 23:50:37.786000 audit[5287]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.786000 audit: BPF prog-id=246 op=LOAD Jan 14 23:50:37.786000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5274 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383933616130376332613964663462666161326132613938616630 Jan 14 23:50:37.799837 containerd[1996]: time="2026-01-14T23:50:37.799564869Z" level=info msg="CreateContainer within sandbox \"b1b04d2c4f4d15ff7d96d287062076fbb06db3130c2410309907ce140d26fe27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55\"" Jan 14 23:50:37.801325 containerd[1996]: time="2026-01-14T23:50:37.800965953Z" level=info msg="StartContainer for \"b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55\"" Jan 14 23:50:37.805464 containerd[1996]: time="2026-01-14T23:50:37.805243737Z" level=info msg="connecting to shim b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55" address="unix:///run/containerd/s/e78d1982f7fbdb4a3def7043d908f78a92dc38c2923e1386166427cb3d0c67a4" protocol=ttrpc version=3 Jan 14 23:50:37.856739 systemd[1]: Started cri-containerd-b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55.scope - libcontainer container b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55. Jan 14 23:50:37.885095 kubelet[3513]: E0114 23:50:37.884789 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:50:37.913000 audit: BPF prog-id=247 op=LOAD Jan 14 23:50:37.916000 audit: BPF prog-id=248 op=LOAD Jan 14 23:50:37.916000 audit[5319]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.916000 audit: BPF prog-id=248 op=UNLOAD Jan 14 23:50:37.916000 audit[5319]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.919000 audit: BPF prog-id=249 op=LOAD Jan 14 23:50:37.919000 audit[5319]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.920000 audit: BPF prog-id=250 op=LOAD Jan 14 23:50:37.920000 audit[5319]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.922000 audit: BPF prog-id=250 op=UNLOAD Jan 14 23:50:37.922000 audit[5319]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.922000 audit: BPF prog-id=249 op=UNLOAD Jan 14 23:50:37.922000 audit[5319]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.923000 audit: BPF prog-id=251 op=LOAD Jan 14 23:50:37.923000 audit[5319]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5218 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:37.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613862613266333337383433303634356261316130626164393632 Jan 14 23:50:37.984641 containerd[1996]: time="2026-01-14T23:50:37.984214990Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:37.992739 containerd[1996]: time="2026-01-14T23:50:37.990471682Z" level=info msg="StartContainer for \"b9a8ba2f3378430645ba1a0bad9624593ec9fd9ca05badda678e9f6e8dc11b55\" returns successfully" Jan 14 23:50:37.995422 containerd[1996]: time="2026-01-14T23:50:37.995337934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:50:37.995959 containerd[1996]: time="2026-01-14T23:50:37.995796550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:37.999219 
kubelet[3513]: E0114 23:50:37.997524 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:38.000170 kubelet[3513]: E0114 23:50:38.000117 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:38.001121 kubelet[3513]: E0114 23:50:38.000896 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcwsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc6d978d-lsv59_calico-apiserver(49be0d3d-84d7-4994-b2e7-37cea1fa9624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:38.002712 kubelet[3513]: E0114 23:50:38.002619 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:50:38.060452 containerd[1996]: time="2026-01-14T23:50:38.060396378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d86655bcb-8jvgv,Uid:296d9b29-20aa-492b-aa70-e26652feb8da,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"94893aa07c2a9df4bfaa2a2a98af0ed1dd412343a9341916e5e9910d61d74fb8\"" Jan 14 23:50:38.066398 containerd[1996]: time="2026-01-14T23:50:38.065920662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:50:38.185246 systemd-networkd[1746]: cali2912fead968: Gained IPv6LL Jan 14 23:50:38.384366 containerd[1996]: time="2026-01-14T23:50:38.384293600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:38.385876 containerd[1996]: time="2026-01-14T23:50:38.385789640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:50:38.386047 containerd[1996]: time="2026-01-14T23:50:38.385831844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:38.386332 kubelet[3513]: E0114 23:50:38.386267 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:38.386613 kubelet[3513]: E0114 23:50:38.386476 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:38.386886 kubelet[3513]: E0114 23:50:38.386810 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-8jvgv_calico-apiserver(296d9b29-20aa-492b-aa70-e26652feb8da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:38.388991 kubelet[3513]: E0114 23:50:38.388839 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:50:38.462620 containerd[1996]: time="2026-01-14T23:50:38.462389348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28nlv,Uid:fd982afe-6d4f-4aff-a998-9f2578e041f1,Namespace:kube-system,Attempt:0,}" Jan 14 23:50:38.462998 containerd[1996]: time="2026-01-14T23:50:38.462958004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fd2bc,Uid:2054a7f1-33a6-4a0b-8079-7e8881899bb3,Namespace:calico-system,Attempt:0,}" Jan 14 23:50:38.760875 systemd-networkd[1746]: calia4709ed0c1b: Gained IPv6LL Jan 14 23:50:38.846569 systemd-networkd[1746]: cali5df0619ad49: Link UP Jan 14 23:50:38.848550 systemd-networkd[1746]: cali5df0619ad49: Gained carrier Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.635 [INFO][5360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0 goldmane-666569f655- calico-system 2054a7f1-33a6-4a0b-8079-7e8881899bb3 915 0 2026-01-14 23:50:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-197 goldmane-666569f655-fd2bc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5df0619ad49 [] [] }} ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.635 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" 
Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.737 [INFO][5382] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" HandleID="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Workload="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.737 [INFO][5382] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" HandleID="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Workload="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cc1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-197", "pod":"goldmane-666569f655-fd2bc", "timestamp":"2026-01-14 23:50:38.737447493 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.737 [INFO][5382] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.737 [INFO][5382] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.737 [INFO][5382] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.758 [INFO][5382] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.771 [INFO][5382] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.782 [INFO][5382] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.785 [INFO][5382] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.793 [INFO][5382] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.793 [INFO][5382] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.800 [INFO][5382] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0 Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.811 [INFO][5382] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.833 [INFO][5382] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.99.72/26] block=192.168.99.64/26 handle="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.833 [INFO][5382] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.72/26] handle="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" host="ip-172-31-18-197" Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.833 [INFO][5382] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:50:38.908003 containerd[1996]: 2026-01-14 23:50:38.835 [INFO][5382] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.72/26] IPv6=[] ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" HandleID="k8s-pod-network.a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Workload="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.911503 containerd[1996]: 2026-01-14 23:50:38.841 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2054a7f1-33a6-4a0b-8079-7e8881899bb3", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"goldmane-666569f655-fd2bc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5df0619ad49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:38.911503 containerd[1996]: 2026-01-14 23:50:38.842 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.72/32] ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.911503 containerd[1996]: 2026-01-14 23:50:38.842 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5df0619ad49 ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.911503 containerd[1996]: 2026-01-14 23:50:38.850 [INFO][5360] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.911503 containerd[1996]: 2026-01-14 23:50:38.854 [INFO][5360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2054a7f1-33a6-4a0b-8079-7e8881899bb3", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0", Pod:"goldmane-666569f655-fd2bc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5df0619ad49", MAC:"b2:32:1d:36:13:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:38.911503 containerd[1996]: 2026-01-14 23:50:38.891 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" Namespace="calico-system" Pod="goldmane-666569f655-fd2bc" WorkloadEndpoint="ip--172--31--18--197-k8s-goldmane--666569f655--fd2bc-eth0" Jan 14 23:50:38.923227 kubelet[3513]: E0114 23:50:38.923047 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:50:38.925177 kubelet[3513]: E0114 23:50:38.925082 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:50:38.994396 containerd[1996]: time="2026-01-14T23:50:38.994309127Z" level=info msg="connecting to shim a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0" address="unix:///run/containerd/s/424817a996f6a7cafc73e22f1da037df582e7b694f840ab15e69ff742fa24b58" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:39.008674 kubelet[3513]: I0114 23:50:39.007915 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cqpz4" podStartSLOduration=60.007767151 podStartE2EDuration="1m0.007767151s" podCreationTimestamp="2026-01-14 23:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:50:38.958774162 +0000 UTC m=+63.793234733" watchObservedRunningTime="2026-01-14 23:50:39.007767151 +0000 UTC m=+63.842227638" Jan 14 23:50:39.080706 systemd-networkd[1746]: cali16ed2dae3b7: Gained IPv6LL Jan 14 23:50:39.127027 systemd[1]: Started cri-containerd-a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0.scope - libcontainer container a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0. Jan 14 23:50:39.148421 systemd-networkd[1746]: calia11178709f7: Link UP Jan 14 23:50:39.150040 systemd-networkd[1746]: calia11178709f7: Gained carrier Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.626 [INFO][5357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0 coredns-674b8bbfcf- kube-system fd982afe-6d4f-4aff-a998-9f2578e041f1 912 0 2026-01-14 23:49:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-197 coredns-674b8bbfcf-28nlv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia11178709f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.627 [INFO][5357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.761 [INFO][5380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" HandleID="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Workload="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.764 [INFO][5380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" HandleID="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Workload="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000314140), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-197", "pod":"coredns-674b8bbfcf-28nlv", "timestamp":"2026-01-14 23:50:38.761559945 +0000 UTC"}, Hostname:"ip-172-31-18-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.764 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.834 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.835 [INFO][5380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-197' Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.893 [INFO][5380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.915 [INFO][5380] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.935 [INFO][5380] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.953 [INFO][5380] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.968 [INFO][5380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.968 [INFO][5380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:38.996 [INFO][5380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86 Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:39.056 [INFO][5380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:39.094 [INFO][5380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.73/26] block=192.168.99.64/26 handle="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:39.095 [INFO][5380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.73/26] handle="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" host="ip-172-31-18-197" Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:39.095 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:50:39.189315 containerd[1996]: 2026-01-14 23:50:39.097 [INFO][5380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.73/26] IPv6=[] ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" HandleID="k8s-pod-network.fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Workload="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.190483 containerd[1996]: 2026-01-14 23:50:39.120 [INFO][5357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd982afe-6d4f-4aff-a998-9f2578e041f1", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"", Pod:"coredns-674b8bbfcf-28nlv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia11178709f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:39.190483 containerd[1996]: 2026-01-14 23:50:39.127 [INFO][5357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.73/32] ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.190483 containerd[1996]: 2026-01-14 23:50:39.128 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia11178709f7 ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.190483 containerd[1996]: 2026-01-14 23:50:39.152 [INFO][5357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" 
WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.190483 containerd[1996]: 2026-01-14 23:50:39.154 [INFO][5357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd982afe-6d4f-4aff-a998-9f2578e041f1", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-197", ContainerID:"fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86", Pod:"coredns-674b8bbfcf-28nlv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia11178709f7", MAC:"be:cc:a6:38:23:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:50:39.190483 containerd[1996]: 2026-01-14 23:50:39.181 [INFO][5357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" Namespace="kube-system" Pod="coredns-674b8bbfcf-28nlv" WorkloadEndpoint="ip--172--31--18--197-k8s-coredns--674b8bbfcf--28nlv-eth0" Jan 14 23:50:39.248000 audit: BPF prog-id=252 op=LOAD Jan 14 23:50:39.254000 audit: BPF prog-id=253 op=LOAD Jan 14 23:50:39.254000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.254000 audit: BPF prog-id=253 op=UNLOAD Jan 14 23:50:39.254000 audit[5420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 
a2=0 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.257000 audit: BPF prog-id=254 op=LOAD Jan 14 23:50:39.257000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.260997 containerd[1996]: time="2026-01-14T23:50:39.260712548Z" level=info msg="connecting to shim fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86" address="unix:///run/containerd/s/61ca57e3d05c18dfa6ea5f89643d45a40c1a5257545bbb9aae48322e421aaaf0" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:50:39.258000 audit: BPF prog-id=255 op=LOAD Jan 14 23:50:39.258000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.259000 audit: BPF prog-id=255 op=UNLOAD Jan 14 23:50:39.259000 audit[5420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.260000 audit: BPF prog-id=254 op=UNLOAD Jan 14 23:50:39.260000 audit[5420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.260000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.261000 audit: BPF prog-id=256 op=LOAD Jan 14 23:50:39.261000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5409 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613763393761303262656665393330663733386238383535323133 Jan 14 23:50:39.299000 audit[5461]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:39.299000 audit[5461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc5c596b0 a2=0 a3=1 items=0 ppid=3619 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:39.319000 audit[5461]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:39.319000 audit[5461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc5c596b0 a2=0 a3=1 items=0 ppid=3619 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:39.365014 systemd[1]: Started cri-containerd-fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86.scope - libcontainer container fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86. 
Jan 14 23:50:39.453000 audit: BPF prog-id=257 op=LOAD Jan 14 23:50:39.456653 kernel: kauditd_printk_skb: 180 callbacks suppressed Jan 14 23:50:39.456773 kernel: audit: type=1334 audit(1768434639.453:741): prog-id=257 op=LOAD Jan 14 23:50:39.458000 audit: BPF prog-id=258 op=LOAD Jan 14 23:50:39.462529 kernel: audit: type=1334 audit(1768434639.458:742): prog-id=258 op=LOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.469708 kernel: audit: type=1300 audit(1768434639.458:742): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.479484 kernel: audit: type=1327 audit(1768434639.458:742): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.458000 audit: BPF prog-id=258 op=UNLOAD Jan 14 23:50:39.484302 kernel: audit: type=1334 audit(1768434639.458:743): prog-id=258 op=UNLOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.491393 kernel: audit: type=1300 audit(1768434639.458:743): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.502787 kernel: audit: type=1327 audit(1768434639.458:743): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.458000 audit: BPF prog-id=259 op=LOAD Jan 14 23:50:39.508153 kernel: audit: type=1334 audit(1768434639.458:744): prog-id=259 op=LOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.514796 kernel: audit: type=1300 audit(1768434639.458:744): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.526823 kernel: audit: type=1327 audit(1768434639.458:744): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.458000 audit: BPF prog-id=260 op=LOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.458000 audit: BPF prog-id=260 op=UNLOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.458000 audit: BPF prog-id=259 op=UNLOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.458000 audit: BPF prog-id=261 op=LOAD Jan 14 23:50:39.458000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5459 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303631656539333764653139383832346365316465303637383337 Jan 14 23:50:39.522000 audit[5492]: NETFILTER_CFG table=filter:139 family=2 entries=17 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:39.522000 audit[5492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffea418b10 a2=0 a3=1 items=0 ppid=3619 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:39.541000 audit[5492]: NETFILTER_CFG table=nat:140 family=2 entries=35 op=nft_register_chain pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:39.541000 audit[5492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffea418b10 a2=0 a3=1 items=0 ppid=3619 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:39.558858 containerd[1996]: time="2026-01-14T23:50:39.558764541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28nlv,Uid:fd982afe-6d4f-4aff-a998-9f2578e041f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86\"" Jan 14 23:50:39.574302 containerd[1996]: time="2026-01-14T23:50:39.572690109Z" level=info msg="CreateContainer within sandbox \"fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:50:39.591009 containerd[1996]: time="2026-01-14T23:50:39.590934250Z" level=info msg="Container ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:50:39.605528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423175075.mount: Deactivated successfully. 
Jan 14 23:50:39.612236 containerd[1996]: time="2026-01-14T23:50:39.612141274Z" level=info msg="CreateContainer within sandbox \"fe061ee937de198824ce1de067837208bbce669a031481410e61502957821d86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a\"" Jan 14 23:50:39.613305 containerd[1996]: time="2026-01-14T23:50:39.613210342Z" level=info msg="StartContainer for \"ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a\"" Jan 14 23:50:39.616435 containerd[1996]: time="2026-01-14T23:50:39.616310410Z" level=info msg="connecting to shim ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a" address="unix:///run/containerd/s/61ca57e3d05c18dfa6ea5f89643d45a40c1a5257545bbb9aae48322e421aaaf0" protocol=ttrpc version=3 Jan 14 23:50:39.679044 systemd[1]: Started cri-containerd-ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a.scope - libcontainer container ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a. Jan 14 23:50:39.730000 audit: BPF prog-id=262 op=LOAD Jan 14 23:50:39.733000 audit: BPF prog-id=263 op=LOAD Jan 14 23:50:39.733000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.734000 audit: BPF prog-id=263 op=UNLOAD Jan 14 23:50:39.734000 audit[5501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.735000 audit: BPF prog-id=264 op=LOAD Jan 14 23:50:39.735000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.736000 audit: BPF prog-id=265 op=LOAD Jan 14 23:50:39.736000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.736000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.737000 audit: BPF prog-id=265 op=UNLOAD Jan 14 23:50:39.737000 audit[5501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.737000 audit: BPF prog-id=264 op=UNLOAD Jan 14 23:50:39.737000 audit[5501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.737000 audit: BPF prog-id=266 op=LOAD Jan 14 23:50:39.737000 audit[5501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5459 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563653436623937323839643534653262373438336163653164373665 Jan 14 23:50:39.742058 containerd[1996]: time="2026-01-14T23:50:39.741510142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fd2bc,Uid:2054a7f1-33a6-4a0b-8079-7e8881899bb3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6a7c97a02befe930f738b88552136612e6ab07efaa1633ed4a7ba7076267ac0\"" Jan 14 23:50:39.739000 audit[5512]: NETFILTER_CFG table=filter:141 family=2 entries=68 op=nft_register_chain pid=5512 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:39.739000 audit[5512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32308 a0=3 a1=ffffef540fb0 a2=0 a3=ffffa6c5bfa8 items=0 ppid=4631 pid=5512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:39.739000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:39.750092 containerd[1996]: time="2026-01-14T23:50:39.749975122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:50:39.818679 
containerd[1996]: time="2026-01-14T23:50:39.817999619Z" level=info msg="StartContainer for \"ece46b97289d54e2b7483ace1d76e823508d8b8bd72d0499a46792c6cc01a47a\" returns successfully" Jan 14 23:50:39.935116 kubelet[3513]: E0114 23:50:39.933899 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:50:40.036000 audit[5548]: NETFILTER_CFG table=filter:142 family=2 entries=66 op=nft_register_chain pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:50:40.036000 audit[5548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29136 a0=3 a1=fffffbeeb5d0 a2=0 a3=ffff939c1fa8 items=0 ppid=4631 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:40.036000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:50:40.154979 containerd[1996]: time="2026-01-14T23:50:40.154857092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:40.156297 containerd[1996]: time="2026-01-14T23:50:40.156230324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:50:40.156796 containerd[1996]: time="2026-01-14T23:50:40.156357176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:40.157180 kubelet[3513]: E0114 23:50:40.157085 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:50:40.157180 kubelet[3513]: E0114 23:50:40.157169 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:50:40.157881 kubelet[3513]: E0114 23:50:40.157389 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr2sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fd2bc_calico-system(2054a7f1-33a6-4a0b-8079-7e8881899bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:40.158974 kubelet[3513]: E0114 23:50:40.158849 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:50:40.168818 systemd-networkd[1746]: calia11178709f7: Gained IPv6LL Jan 14 23:50:40.423843 systemd-networkd[1746]: cali5df0619ad49: Gained 
IPv6LL Jan 14 23:50:40.589000 audit[5553]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:40.589000 audit[5553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff870b1a0 a2=0 a3=1 items=0 ppid=3619 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:40.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:40.596000 audit[5553]: NETFILTER_CFG table=nat:144 family=2 entries=44 op=nft_register_rule pid=5553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:40.596000 audit[5553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff870b1a0 a2=0 a3=1 items=0 ppid=3619 pid=5553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:40.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:40.935600 kubelet[3513]: E0114 23:50:40.935506 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:50:40.978435 kubelet[3513]: I0114 23:50:40.978331 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-28nlv" podStartSLOduration=61.978307452 podStartE2EDuration="1m1.978307452s" podCreationTimestamp="2026-01-14 23:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:50:39.989106984 +0000 UTC m=+64.823567459" watchObservedRunningTime="2026-01-14 23:50:40.978307452 +0000 UTC m=+65.812767915" Jan 14 23:50:41.631000 audit[5556]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:41.631000 audit[5556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebaf8ae0 a2=0 a3=1 items=0 ppid=3619 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:41.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:41.639000 audit[5556]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:41.639000 audit[5556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffebaf8ae0 a2=0 a3=1 items=0 ppid=3619 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:50:41.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 23:50:42.631852 ntpd[1953]: Listen normally on 6 vxlan.calico 192.168.99.64:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 6 vxlan.calico 192.168.99.64:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 7 cali6f660655851 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::64fe:a2ff:feb1:e57d%5]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 9 cali0fc9eb70906 [fe80::ecee:eeff:feee:eeee%8]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 10 cali3d81a774607 [fe80::ecee:eeff:feee:eeee%9]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 11 cali8ce4be00037 [fe80::ecee:eeff:feee:eeee%10]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 12 cali2912fead968 [fe80::ecee:eeff:feee:eeee%11]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 13 cali16ed2dae3b7 [fe80::ecee:eeff:feee:eeee%12]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 14 calia4709ed0c1b [fe80::ecee:eeff:feee:eeee%13]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 15 cali5df0619ad49 [fe80::ecee:eeff:feee:eeee%14]:123
Jan 14 23:50:42.633088 ntpd[1953]: 14 Jan 23:50:42 ntpd[1953]: Listen normally on 16 calia11178709f7 [fe80::ecee:eeff:feee:eeee%15]:123
Jan 14 23:50:42.631932 ntpd[1953]: Listen normally on 7 cali6f660655851 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 14 23:50:42.631980 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::64fe:a2ff:feb1:e57d%5]:123
Jan 14 23:50:42.632025 ntpd[1953]: Listen normally on 9 cali0fc9eb70906 [fe80::ecee:eeff:feee:eeee%8]:123
Jan 14 23:50:42.632070 ntpd[1953]: Listen normally on 10 cali3d81a774607 [fe80::ecee:eeff:feee:eeee%9]:123
Jan 14 23:50:42.632114 ntpd[1953]: Listen normally on 11 cali8ce4be00037 [fe80::ecee:eeff:feee:eeee%10]:123
Jan 14 23:50:42.632786 ntpd[1953]: Listen normally on 12 cali2912fead968 [fe80::ecee:eeff:feee:eeee%11]:123
Jan 14 23:50:42.632847 ntpd[1953]: Listen normally on 13 cali16ed2dae3b7 [fe80::ecee:eeff:feee:eeee%12]:123
Jan 14 23:50:42.632891 ntpd[1953]: Listen normally on 14 calia4709ed0c1b [fe80::ecee:eeff:feee:eeee%13]:123
Jan 14 23:50:42.632935 ntpd[1953]: Listen normally on 15 cali5df0619ad49 [fe80::ecee:eeff:feee:eeee%14]:123
Jan 14 23:50:42.632979 ntpd[1953]: Listen normally on 16 calia11178709f7 [fe80::ecee:eeff:feee:eeee%15]:123
Jan 14 23:50:42.664000 audit[5558]: NETFILTER_CFG table=filter:147 family=2 entries=14 op=nft_register_rule pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 23:50:42.664000 audit[5558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5b09920 a2=0 a3=1 items=0 ppid=3619 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:50:42.664000 audit: PROCTITLE
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:42.708000 audit[5558]: NETFILTER_CFG table=nat:148 family=2 entries=56 op=nft_register_chain pid=5558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:50:42.708000 audit[5558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd5b09920 a2=0 a3=1 items=0 ppid=3619 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:42.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:50:46.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.197:22-20.161.92.111:44136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:46.405476 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 14 23:50:46.405547 kernel: audit: type=1130 audit(1768434646.402:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.197:22-20.161.92.111:44136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:46.403825 systemd[1]: Started sshd@7-172.31.18.197:22-20.161.92.111:44136.service - OpenSSH per-connection server daemon (20.161.92.111:44136). Jan 14 23:50:46.923000 audit[5576]: USER_ACCT pid=5576 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.925805 sshd[5576]: Accepted publickey for core from 20.161.92.111 port 44136 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:50:46.929000 audit[5576]: CRED_ACQ pid=5576 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.933649 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:50:46.936892 kernel: audit: type=1101 audit(1768434646.923:768): pid=5576 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.936983 kernel: audit: type=1103 audit(1768434646.929:769): pid=5576 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.941595 kernel: audit: type=1006 audit(1768434646.929:770): pid=5576 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 14 23:50:46.941717 kernel: audit: type=1300 audit(1768434646.929:770): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4f93630 a2=3 a3=0 items=0 ppid=1 pid=5576 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:46.929000 audit[5576]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4f93630 a2=3 a3=0 items=0 ppid=1 pid=5576 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:46.929000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:46.950420 kernel: audit: type=1327 audit(1768434646.929:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:46.957324 systemd-logind[1960]: New session 8 of user core. Jan 14 23:50:46.962922 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 23:50:46.966000 audit[5576]: USER_START pid=5576 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.976517 kernel: audit: type=1105 audit(1768434646.966:771): pid=5576 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.976673 kernel: audit: type=1103 audit(1768434646.974:772): pid=5579 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:46.974000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:47.318744 sshd[5579]: Connection closed by 20.161.92.111 port 44136 Jan 14 23:50:47.319529 sshd-session[5576]: pam_unix(sshd:session): session closed for user core Jan 14 23:50:47.321000 audit[5576]: USER_END pid=5576 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:47.331367 systemd[1]: sshd@7-172.31.18.197:22-20.161.92.111:44136.service: Deactivated successfully. 
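Note on the audit PROCTITLE records above: the proctitle= value is the process's argv, hex-encoded with NUL bytes between arguments; tools like `ausearch -i` normally render it decoded. A minimal standalone sketch (plain Python, not part of the log) that decodes the values captured here:

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # Value from the iptables-restore records above:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

    # The sshd records above carry 737368642D73657373696F6E3A20636F7265205B707269765D,
    # which decodes to: sshd-session: core [priv]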
Jan 14 23:50:47.337200 kernel: audit: type=1106 audit(1768434647.321:773): pid=5576 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:47.337329 kernel: audit: type=1104 audit(1768434647.321:774): pid=5576 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:47.321000 audit[5576]: CRED_DISP pid=5576 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:47.336403 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 23:50:47.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.18.197:22-20.161.92.111:44136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:47.339545 systemd-logind[1960]: Session 8 logged out. Waiting for processes to exit. Jan 14 23:50:47.346545 systemd-logind[1960]: Removed session 8. Jan 14 23:50:48.465420 containerd[1996]: time="2026-01-14T23:50:48.465312366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:50:48.731685 containerd[1996]: time="2026-01-14T23:50:48.731352763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:48.732997 containerd[1996]: time="2026-01-14T23:50:48.732794503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:50:48.732997 containerd[1996]: time="2026-01-14T23:50:48.732916687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:48.733386 kubelet[3513]: E0114 23:50:48.733325 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:50:48.735062 kubelet[3513]: E0114 23:50:48.733398 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:50:48.735062 kubelet[3513]: E0114 23:50:48.733557 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c071b4136e94af696b69447927c640d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:48.738136 containerd[1996]: time="2026-01-14T23:50:48.737707879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:50:49.006533 containerd[1996]: time="2026-01-14T23:50:49.006366544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:49.008205 containerd[1996]: time="2026-01-14T23:50:49.008073364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:49.008205 containerd[1996]: time="2026-01-14T23:50:49.008182036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:50:49.009811 kubelet[3513]: E0114 23:50:49.009389 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:50:49.010634 kubelet[3513]: E0114 23:50:49.009772 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:50:49.010634 kubelet[3513]: E0114 23:50:49.010368 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:49.011798 kubelet[3513]: E0114 23:50:49.011705 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:50:49.468620 containerd[1996]: time="2026-01-14T23:50:49.468550303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:50:49.748292 containerd[1996]: time="2026-01-14T23:50:49.748126952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:49.749478 containerd[1996]: time="2026-01-14T23:50:49.749410232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:50:49.749642 
containerd[1996]: time="2026-01-14T23:50:49.749534516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:49.749926 kubelet[3513]: E0114 23:50:49.749853 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:50:49.750465 kubelet[3513]: E0114 23:50:49.749927 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:50:49.750465 kubelet[3513]: E0114 23:50:49.750286 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:49.751215 containerd[1996]: time="2026-01-14T23:50:49.750869312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:50:50.025384 containerd[1996]: time="2026-01-14T23:50:50.025212521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:50.026824 containerd[1996]: time="2026-01-14T23:50:50.026742557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:50:50.026989 containerd[1996]: time="2026-01-14T23:50:50.026788769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:50.027275 kubelet[3513]: E0114 23:50:50.027204 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:50.027374 kubelet[3513]: E0114 23:50:50.027282 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:50.028150 kubelet[3513]: E0114 23:50:50.027846 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5xqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-l8j4w_calico-apiserver(255a4468-7378-413d-92bd-8056478658d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Jan 14 23:50:50.028454 containerd[1996]: time="2026-01-14T23:50:50.028250753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:50:50.029404 kubelet[3513]: E0114 23:50:50.029323 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:50:50.279567 containerd[1996]: time="2026-01-14T23:50:50.279234007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:50.280923 containerd[1996]: time="2026-01-14T23:50:50.280757095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:50:50.280923 containerd[1996]: time="2026-01-14T23:50:50.280778815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:50.281336 kubelet[3513]: E0114 23:50:50.281288 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:50:50.281633 kubelet[3513]: E0114 23:50:50.281462 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:50:50.282491 containerd[1996]: time="2026-01-14T23:50:50.282153823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:50:50.283037 kubelet[3513]: E0114 23:50:50.281895 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fmf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fdc6fb9d4-p5lnk_calico-system(21b369b7-d986-41d1-8a2e-a01d832685f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:50.284418 kubelet[3513]: E0114 23:50:50.284347 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:50:50.575080 containerd[1996]: time="2026-01-14T23:50:50.574869584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:50.576938 containerd[1996]: 
time="2026-01-14T23:50:50.576740144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:50:50.576938 containerd[1996]: time="2026-01-14T23:50:50.576862916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:50.577218 kubelet[3513]: E0114 23:50:50.577159 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:50:50.577361 kubelet[3513]: E0114 23:50:50.577256 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:50:50.577691 kubelet[3513]: E0114 23:50:50.577447 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:50.578890 kubelet[3513]: E0114 23:50:50.578658 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:50:51.468877 containerd[1996]: time="2026-01-14T23:50:51.468397017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:50:51.787076 containerd[1996]: time="2026-01-14T23:50:51.786911062Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:51.788525 containerd[1996]: time="2026-01-14T23:50:51.788450974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:50:51.788655 containerd[1996]: time="2026-01-14T23:50:51.788578258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:51.788985 kubelet[3513]: E0114 23:50:51.788921 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:51.790035 kubelet[3513]: E0114 23:50:51.788993 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:51.790035 kubelet[3513]: E0114 23:50:51.789185 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-8jvgv_calico-apiserver(296d9b29-20aa-492b-aa70-e26652feb8da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:51.791013 kubelet[3513]: E0114 23:50:51.790953 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:50:52.414961 systemd[1]: Started sshd@8-172.31.18.197:22-20.161.92.111:40416.service - OpenSSH per-connection server daemon (20.161.92.111:40416). Jan 14 23:50:52.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.197:22-20.161.92.111:40416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:52.417837 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:50:52.417918 kernel: audit: type=1130 audit(1768434652.414:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.197:22-20.161.92.111:40416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:50:52.891000 audit[5595]: USER_ACCT pid=5595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.893704 sshd[5595]: Accepted publickey for core from 20.161.92.111 port 40416 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:50:52.899664 kernel: audit: type=1101 audit(1768434652.891:777): pid=5595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.898000 audit[5595]: CRED_ACQ pid=5595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.901140 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:50:52.909189 kernel: audit: type=1103 audit(1768434652.898:778): pid=5595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.909334 kernel: audit: type=1006 audit(1768434652.898:779): pid=5595 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 23:50:52.898000 audit[5595]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee7b53b0 a2=3 a3=0 items=0 ppid=1 pid=5595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:52.917132 kernel: audit: type=1300 audit(1768434652.898:779): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee7b53b0 a2=3 a3=0 items=0 ppid=1 pid=5595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:52.917247 kernel: audit: type=1327 audit(1768434652.898:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:52.898000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:52.923158 systemd-logind[1960]: New session 9 of user core. Jan 14 23:50:52.930054 systemd[1]: Started session-9.scope - Session 9 of User core. 
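Note on the audit(...) stamps interleaved with the journal timestamps: the format is <epoch-seconds>.<milliseconds>:<event-serial>, and all records sharing one serial belong to the same audit event. A small sketch (illustrative, not from the log) converting the stamp from the type=1300 record above back to wall-clock time:

    from datetime import datetime, timezone

    # audit(1768434652.898:779) -> epoch seconds + event serial
    epoch, serial = "1768434652.898:779".rsplit(":", 1)
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(ts.isoformat(timespec="milliseconds"), "serial", serial)
    # -> 2026-01-14T23:50:52.898+00:00 serial 779 (matches the journal line above)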
Jan 14 23:50:52.935000 audit[5595]: USER_START pid=5595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.944774 kernel: audit: type=1105 audit(1768434652.935:780): pid=5595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.949803 kernel: audit: type=1103 audit(1768434652.943:781): pid=5598 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:52.943000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:53.270246 sshd[5598]: Connection closed by 20.161.92.111 port 40416 Jan 14 23:50:53.271429 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Jan 14 23:50:53.273000 audit[5595]: USER_END pid=5595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:53.281880 systemd[1]: sshd@8-172.31.18.197:22-20.161.92.111:40416.service: Deactivated successfully. Jan 14 23:50:53.274000 audit[5595]: CRED_DISP pid=5595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:53.286188 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 23:50:53.288510 kernel: audit: type=1106 audit(1768434653.273:782): pid=5595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:53.288697 kernel: audit: type=1104 audit(1768434653.274:783): pid=5595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:53.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.18.197:22-20.161.92.111:40416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:53.290558 systemd-logind[1960]: Session 9 logged out. Waiting for processes to exit. Jan 14 23:50:53.295160 systemd-logind[1960]: Removed session 9. 
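Note on the recurring pull failures: each one is the registry answering 404 for the requested tag ("fetch failed after status: 404 Not Found"), which containerd surfaces as "failed to resolve image ...: not found" and kubelet turns into ErrImagePull/ImagePullBackOff. A rough way to reproduce the check outside containerd, assuming the repository allows anonymous pulls and the standard OCI distribution/token endpoints (the helper below is illustrative, not taken from the log):

    # Ask ghcr.io directly whether a tag's manifest exists (anonymous pull).
    import json
    import urllib.error
    import urllib.request

    def manifest_exists(repo: str, tag: str) -> bool:
        # Anonymous bearer token scoped to pulling this repository.
        tok = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
        ))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={"Authorization": f"Bearer {tok}",
                     "Accept": "application/vnd.oci.image.index.v1+json,"
                               "application/vnd.docker.distribution.manifest.list.v2+json"},
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # the same "not found" containerd reports above
                return False
            raise

    # e.g. the tag kubelet keeps retrying in the records above:
    print(manifest_exists("flatcar/calico/apiserver", "v3.30.4"))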
Jan 14 23:50:53.469946 containerd[1996]: time="2026-01-14T23:50:53.468323662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:50:53.727434 containerd[1996]: time="2026-01-14T23:50:53.727225152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:53.728659 containerd[1996]: time="2026-01-14T23:50:53.728611908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:50:53.729096 containerd[1996]: time="2026-01-14T23:50:53.728647200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:53.729392 kubelet[3513]: E0114 23:50:53.729320 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:53.730751 kubelet[3513]: E0114 23:50:53.729963 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:50:53.731757 kubelet[3513]: E0114 23:50:53.731623 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcwsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc6d978d-lsv59_calico-apiserver(49be0d3d-84d7-4994-b2e7-37cea1fa9624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:53.733278 kubelet[3513]: E0114 23:50:53.732992 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:50:54.465392 containerd[1996]: time="2026-01-14T23:50:54.464997767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:50:54.774814 containerd[1996]: time="2026-01-14T23:50:54.774673801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:54.776675 containerd[1996]: time="2026-01-14T23:50:54.776495413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:50:54.776675 containerd[1996]: time="2026-01-14T23:50:54.776518285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:54.777018 kubelet[3513]: E0114 23:50:54.776899 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:50:54.777952 kubelet[3513]: E0114 23:50:54.777003 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:50:54.777952 kubelet[3513]: E0114 23:50:54.777215 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr2sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fd2bc_calico-system(2054a7f1-33a6-4a0b-8079-7e8881899bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:54.778867 kubelet[3513]: E0114 23:50:54.778808 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:50:58.378720 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:50:58.378881 kernel: audit: type=1130 audit(1768434658.370:785): pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.197:22-20.161.92.111:40430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:58.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.197:22-20.161.92.111:40430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:58.372137 systemd[1]: Started sshd@9-172.31.18.197:22-20.161.92.111:40430.service - OpenSSH per-connection server daemon (20.161.92.111:40430). Jan 14 23:50:58.858000 audit[5619]: USER_ACCT pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.866202 sshd[5619]: Accepted publickey for core from 20.161.92.111 port 40430 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:50:58.865000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.867777 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:50:58.868886 kernel: audit: type=1101 audit(1768434658.858:786): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.875165 kernel: audit: type=1103 audit(1768434658.865:787): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.879611 kernel: audit: type=1006 audit(1768434658.865:788): pid=5619 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 23:50:58.865000 audit[5619]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6b75280 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:58.886349 kernel: audit: type=1300 audit(1768434658.865:788): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6b75280 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:58.880363 systemd-logind[1960]: New session 10 of user core. Jan 14 23:50:58.865000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:58.890811 kernel: audit: type=1327 audit(1768434658.865:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:58.894301 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 23:50:58.901000 audit[5619]: USER_START pid=5619 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.909000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.916396 kernel: audit: type=1105 audit(1768434658.901:789): pid=5619 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:58.916701 kernel: audit: type=1103 audit(1768434658.909:790): pid=5622 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.248251 sshd[5622]: Connection closed by 20.161.92.111 port 40430 Jan 14 23:50:59.247995 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Jan 14 23:50:59.250000 audit[5619]: USER_END pid=5619 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.258723 systemd[1]: sshd@9-172.31.18.197:22-20.161.92.111:40430.service: Deactivated successfully. Jan 14 23:50:59.250000 audit[5619]: CRED_DISP pid=5619 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.266197 kernel: audit: type=1106 audit(1768434659.250:791): pid=5619 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.266303 kernel: audit: type=1104 audit(1768434659.250:792): pid=5619 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.18.197:22-20.161.92.111:40430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:59.267500 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 23:50:59.271576 systemd-logind[1960]: Session 10 logged out. Waiting for processes to exit. Jan 14 23:50:59.275790 systemd-logind[1960]: Removed session 10. 
Jan 14 23:50:59.335837 systemd[1]: Started sshd@10-172.31.18.197:22-20.161.92.111:40440.service - OpenSSH per-connection server daemon (20.161.92.111:40440). Jan 14 23:50:59.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.197:22-20.161.92.111:40440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:50:59.790000 audit[5634]: USER_ACCT pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.792814 sshd[5634]: Accepted publickey for core from 20.161.92.111 port 40440 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:50:59.792000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.795401 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:50:59.793000 audit[5634]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8a3cbb0 a2=3 a3=0 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:50:59.793000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:50:59.804777 systemd-logind[1960]: New session 11 of user core. Jan 14 23:50:59.812913 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 23:50:59.816000 audit[5634]: USER_START pid=5634 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:50:59.820000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:00.274315 sshd[5637]: Connection closed by 20.161.92.111 port 40440 Jan 14 23:51:00.302156 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:00.302000 audit[5634]: USER_END pid=5634 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:00.303000 audit[5634]: CRED_DISP pid=5634 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:00.309906 systemd-logind[1960]: Session 11 logged out. Waiting for processes to exit. 
Jan 14 23:51:00.311153 systemd[1]: sshd@10-172.31.18.197:22-20.161.92.111:40440.service: Deactivated successfully. Jan 14 23:51:00.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.18.197:22-20.161.92.111:40440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:00.317029 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 23:51:00.321576 systemd-logind[1960]: Removed session 11. Jan 14 23:51:00.371103 systemd[1]: Started sshd@11-172.31.18.197:22-20.161.92.111:40444.service - OpenSSH per-connection server daemon (20.161.92.111:40444). Jan 14 23:51:00.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.197:22-20.161.92.111:40444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:00.464021 kubelet[3513]: E0114 23:51:00.463540 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:51:00.839000 audit[5649]: USER_ACCT pid=5649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:00.842034 sshd[5649]: Accepted publickey for core from 20.161.92.111 port 40444 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:00.843000 audit[5649]: CRED_ACQ pid=5649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:00.844000 audit[5649]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4a02890 a2=3 a3=0 items=0 ppid=1 pid=5649 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:00.844000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:00.847567 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:00.863945 systemd-logind[1960]: New session 12 of user core. Jan 14 23:51:00.871167 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 23:51:00.877000 audit[5649]: USER_START pid=5649 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:00.881000 audit[5676]: CRED_ACQ pid=5676 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:01.255707 sshd[5676]: Connection closed by 20.161.92.111 port 40444 Jan 14 23:51:01.254487 sshd-session[5649]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:01.256000 audit[5649]: USER_END pid=5649 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:01.256000 audit[5649]: CRED_DISP pid=5649 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:01.263224 systemd[1]: sshd@11-172.31.18.197:22-20.161.92.111:40444.service: Deactivated successfully. Jan 14 23:51:01.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.18.197:22-20.161.92.111:40444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:01.268104 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 23:51:01.271077 systemd-logind[1960]: Session 12 logged out. Waiting for processes to exit. Jan 14 23:51:01.275202 systemd-logind[1960]: Removed session 12. 
Jan 14 23:51:03.466566 kubelet[3513]: E0114 23:51:03.466380 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:51:04.468736 kubelet[3513]: E0114 23:51:04.468486 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:51:04.471042 kubelet[3513]: E0114 23:51:04.470797 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:51:04.475871 kubelet[3513]: E0114 23:51:04.475315 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:51:04.481778 kubelet[3513]: E0114 23:51:04.480560 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:51:05.473021 kubelet[3513]: E0114 23:51:05.472899 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:51:06.350158 systemd[1]: Started sshd@12-172.31.18.197:22-20.161.92.111:34834.service - OpenSSH per-connection server daemon (20.161.92.111:34834). Jan 14 23:51:06.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.197:22-20.161.92.111:34834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:06.352337 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 23:51:06.352436 kernel: audit: type=1130 audit(1768434666.349:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.197:22-20.161.92.111:34834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:06.839000 audit[5697]: USER_ACCT pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.848865 kernel: audit: type=1101 audit(1768434666.839:813): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.848930 sshd[5697]: Accepted publickey for core from 20.161.92.111 port 34834 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:06.848000 audit[5697]: CRED_ACQ pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.851519 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:06.859696 kernel: audit: type=1103 audit(1768434666.848:814): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.859960 kernel: audit: type=1006 audit(1768434666.848:815): pid=5697 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 23:51:06.860054 kernel: audit: type=1300 audit(1768434666.848:815): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2c324a0 a2=3 a3=0 items=0 ppid=1 pid=5697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:06.848000 audit[5697]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2c324a0 a2=3 a3=0 items=0 ppid=1 pid=5697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:06.848000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:06.869382 kernel: audit: type=1327 audit(1768434666.848:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:06.873324 systemd-logind[1960]: New session 13 of user core. Jan 14 23:51:06.884934 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 23:51:06.894000 audit[5697]: USER_START pid=5697 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.898000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.909013 kernel: audit: type=1105 audit(1768434666.894:816): pid=5697 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:06.909226 kernel: audit: type=1103 audit(1768434666.898:817): pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:07.219650 sshd[5700]: Connection closed by 20.161.92.111 port 34834 Jan 14 23:51:07.218397 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:07.220000 audit[5697]: USER_END pid=5697 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:07.229003 systemd[1]: sshd@12-172.31.18.197:22-20.161.92.111:34834.service: Deactivated successfully. Jan 14 23:51:07.222000 audit[5697]: CRED_DISP pid=5697 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:07.235003 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 14 23:51:07.235626 kernel: audit: type=1106 audit(1768434667.220:818): pid=5697 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:07.235715 kernel: audit: type=1104 audit(1768434667.222:819): pid=5697 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:07.240645 systemd-logind[1960]: Session 13 logged out. Waiting for processes to exit. Jan 14 23:51:07.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.18.197:22-20.161.92.111:34834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:07.244447 systemd-logind[1960]: Removed session 13. Jan 14 23:51:12.323715 systemd[1]: Started sshd@13-172.31.18.197:22-20.161.92.111:34848.service - OpenSSH per-connection server daemon (20.161.92.111:34848). Jan 14 23:51:12.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.197:22-20.161.92.111:34848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:12.328791 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:51:12.328904 kernel: audit: type=1130 audit(1768434672.322:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.197:22-20.161.92.111:34848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:51:12.465296 containerd[1996]: time="2026-01-14T23:51:12.465139997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:51:12.728485 containerd[1996]: time="2026-01-14T23:51:12.728428854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:12.730041 containerd[1996]: time="2026-01-14T23:51:12.729942114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:51:12.730241 containerd[1996]: time="2026-01-14T23:51:12.729964038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:12.730804 kubelet[3513]: E0114 23:51:12.730371 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:51:12.730804 kubelet[3513]: E0114 23:51:12.730435 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:51:12.730804 kubelet[3513]: E0114 23:51:12.730630 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c071b4136e94af696b69447927c640d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:12.734437 containerd[1996]: time="2026-01-14T23:51:12.734343282Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:51:12.815000 audit[5717]: USER_ACCT pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.817739 sshd[5717]: Accepted publickey for core from 20.161.92.111 port 34848 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:12.824634 kernel: audit: type=1101 audit(1768434672.815:822): pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.823000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.826441 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:12.832626 kernel: audit: type=1103 audit(1768434672.823:823): pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.824000 audit[5717]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8956660 a2=3 a3=0 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:12.842727 kernel: audit: type=1006 audit(1768434672.824:824): pid=5717 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 23:51:12.842841 kernel: audit: type=1300 audit(1768434672.824:824): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8956660 a2=3 a3=0 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:12.838004 systemd-logind[1960]: New session 14 of user core. Jan 14 23:51:12.824000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:12.845855 kernel: audit: type=1327 audit(1768434672.824:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:12.849920 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 23:51:12.856000 audit[5717]: USER_START pid=5717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.864838 kernel: audit: type=1105 audit(1768434672.856:825): pid=5717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.863000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.871738 kernel: audit: type=1103 audit(1768434672.863:826): pid=5720 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:12.995069 containerd[1996]: time="2026-01-14T23:51:12.994910779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:12.998134 containerd[1996]: time="2026-01-14T23:51:12.997423951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:51:12.998620 containerd[1996]: time="2026-01-14T23:51:12.997566355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:12.999651 kubelet[3513]: E0114 23:51:12.998941 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:51:12.999651 kubelet[3513]: E0114 23:51:12.999007 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:51:12.999651 kubelet[3513]: E0114 23:51:12.999212 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:13.001802 kubelet[3513]: E0114 23:51:13.001708 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:51:13.282953 sshd[5720]: Connection closed by 20.161.92.111 port 34848 Jan 14 23:51:13.283381 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:13.285000 audit[5717]: USER_END pid=5717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:13.297631 systemd[1]: sshd@13-172.31.18.197:22-20.161.92.111:34848.service: Deactivated successfully. 
Jan 14 23:51:13.285000 audit[5717]: CRED_DISP pid=5717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:13.304317 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 23:51:13.310544 kernel: audit: type=1106 audit(1768434673.285:827): pid=5717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:13.310914 kernel: audit: type=1104 audit(1768434673.285:828): pid=5717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:13.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.18.197:22-20.161.92.111:34848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:13.313555 systemd-logind[1960]: Session 14 logged out. Waiting for processes to exit. Jan 14 23:51:13.321212 systemd-logind[1960]: Removed session 14. Jan 14 23:51:17.466297 containerd[1996]: time="2026-01-14T23:51:17.466043566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:51:17.772925 containerd[1996]: time="2026-01-14T23:51:17.772561463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:17.774865 containerd[1996]: time="2026-01-14T23:51:17.774730799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:51:17.774984 containerd[1996]: time="2026-01-14T23:51:17.774739163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:17.775669 kubelet[3513]: E0114 23:51:17.775386 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:17.775669 kubelet[3513]: E0114 23:51:17.775455 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:17.776347 containerd[1996]: time="2026-01-14T23:51:17.775987607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:51:17.777255 kubelet[3513]: E0114 23:51:17.776439 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-8jvgv_calico-apiserver(296d9b29-20aa-492b-aa70-e26652feb8da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:17.777872 kubelet[3513]: E0114 23:51:17.777667 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:51:18.080412 containerd[1996]: time="2026-01-14T23:51:18.080262909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:18.083539 containerd[1996]: time="2026-01-14T23:51:18.083462217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:51:18.083846 containerd[1996]: time="2026-01-14T23:51:18.083750613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:18.084684 kubelet[3513]: E0114 23:51:18.084230 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:51:18.084684 kubelet[3513]: E0114 23:51:18.084296 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:51:18.085537 kubelet[3513]: E0114 23:51:18.084510 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr2sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fd2bc_calico-system(2054a7f1-33a6-4a0b-8079-7e8881899bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
logger="UnhandledError" Jan 14 23:51:18.087119 kubelet[3513]: E0114 23:51:18.087027 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:51:18.382390 systemd[1]: Started sshd@14-172.31.18.197:22-20.161.92.111:37828.service - OpenSSH per-connection server daemon (20.161.92.111:37828). Jan 14 23:51:18.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.197:22-20.161.92.111:37828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:18.385014 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:51:18.385622 kernel: audit: type=1130 audit(1768434678.381:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.197:22-20.161.92.111:37828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:18.468465 containerd[1996]: time="2026-01-14T23:51:18.467821439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:51:18.785397 containerd[1996]: time="2026-01-14T23:51:18.785225820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:18.788136 containerd[1996]: time="2026-01-14T23:51:18.788071104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:51:18.788265 containerd[1996]: time="2026-01-14T23:51:18.788191884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:18.788470 kubelet[3513]: E0114 23:51:18.788410 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:51:18.789753 kubelet[3513]: E0114 23:51:18.788470 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:51:18.789827 containerd[1996]: time="2026-01-14T23:51:18.788907216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:51:18.790999 kubelet[3513]: E0114 23:51:18.790445 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fmf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fdc6fb9d4-p5lnk_calico-system(21b369b7-d986-41d1-8a2e-a01d832685f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:18.792245 kubelet[3513]: E0114 23:51:18.792167 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:51:18.894000 audit[5737]: USER_ACCT pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.902872 sshd[5737]: Accepted publickey for core from 20.161.92.111 port 37828 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:18.901000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.910442 kernel: audit: type=1101 audit(1768434678.894:831): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.910536 kernel: audit: type=1103 audit(1768434678.901:832): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.913965 kernel: audit: type=1006 audit(1768434678.901:833): pid=5737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 23:51:18.901000 audit[5737]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea65e020 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:18.920294 kernel: audit: type=1300 audit(1768434678.901:833): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea65e020 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:18.901000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:18.922704 kernel: audit: type=1327 audit(1768434678.901:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:18.928420 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:18.937440 systemd-logind[1960]: New session 15 of user core. Jan 14 23:51:18.950948 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 23:51:18.955000 audit[5737]: USER_START pid=5737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.963000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.973202 kernel: audit: type=1105 audit(1768434678.955:834): pid=5737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:18.973366 kernel: audit: type=1103 audit(1768434678.963:835): pid=5740 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:19.063756 containerd[1996]: time="2026-01-14T23:51:19.063472522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:19.069021 containerd[1996]: time="2026-01-14T23:51:19.068875726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:51:19.069379 containerd[1996]: time="2026-01-14T23:51:19.068948050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:19.069543 kubelet[3513]: E0114 23:51:19.069449 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:19.069543 kubelet[3513]: E0114 23:51:19.069511 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:19.071313 kubelet[3513]: E0114 23:51:19.071085 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5xqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-l8j4w_calico-apiserver(255a4468-7378-413d-92bd-8056478658d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:19.071738 containerd[1996]: time="2026-01-14T23:51:19.071216902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:51:19.072635 kubelet[3513]: E0114 23:51:19.072452 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:51:19.307238 sshd[5740]: Connection closed by 20.161.92.111 port 37828 Jan 14 23:51:19.308430 sshd-session[5737]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:19.310000 audit[5737]: USER_END pid=5737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:19.318230 systemd[1]: sshd@14-172.31.18.197:22-20.161.92.111:37828.service: Deactivated successfully. 
Jan 14 23:51:19.310000 audit[5737]: CRED_DISP pid=5737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:19.323300 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 23:51:19.325226 kernel: audit: type=1106 audit(1768434679.310:836): pid=5737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:19.325439 kernel: audit: type=1104 audit(1768434679.310:837): pid=5737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:19.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.18.197:22-20.161.92.111:37828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:19.328551 systemd-logind[1960]: Session 15 logged out. Waiting for processes to exit. Jan 14 23:51:19.332062 systemd-logind[1960]: Removed session 15. Jan 14 23:51:19.385724 containerd[1996]: time="2026-01-14T23:51:19.385661039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:19.388176 containerd[1996]: time="2026-01-14T23:51:19.388059071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:51:19.388176 containerd[1996]: time="2026-01-14T23:51:19.388128971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:19.388506 kubelet[3513]: E0114 23:51:19.388344 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:51:19.388506 kubelet[3513]: E0114 23:51:19.388404 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:51:19.388731 kubelet[3513]: E0114 23:51:19.388577 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:19.392900 containerd[1996]: time="2026-01-14T23:51:19.392844875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:51:19.673764 containerd[1996]: time="2026-01-14T23:51:19.673689685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:19.676030 containerd[1996]: time="2026-01-14T23:51:19.675956701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:51:19.676149 containerd[1996]: time="2026-01-14T23:51:19.676082329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:19.676693 kubelet[3513]: E0114 23:51:19.676335 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:51:19.676693 kubelet[3513]: E0114 23:51:19.676400 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:51:19.677396 kubelet[3513]: E0114 23:51:19.676801 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:19.677666 containerd[1996]: time="2026-01-14T23:51:19.677082241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:51:19.678181 kubelet[3513]: E0114 23:51:19.677961 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:51:19.921575 containerd[1996]: time="2026-01-14T23:51:19.921359906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
23:51:19.924479 containerd[1996]: time="2026-01-14T23:51:19.923668958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:51:19.924479 containerd[1996]: time="2026-01-14T23:51:19.923744378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:19.925679 kubelet[3513]: E0114 23:51:19.924846 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:19.925679 kubelet[3513]: E0114 23:51:19.924909 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:19.925679 kubelet[3513]: E0114 23:51:19.925113 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcwsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc6d978d-lsv59_calico-apiserver(49be0d3d-84d7-4994-b2e7-37cea1fa9624): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:19.926757 kubelet[3513]: E0114 23:51:19.926659 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:51:24.410044 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:51:24.410170 kernel: audit: type=1130 audit(1768434684.406:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.197:22-20.161.92.111:45400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:24.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.197:22-20.161.92.111:45400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:24.408098 systemd[1]: Started sshd@15-172.31.18.197:22-20.161.92.111:45400.service - OpenSSH per-connection server daemon (20.161.92.111:45400). Jan 14 23:51:24.883000 audit[5756]: USER_ACCT pid=5756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.885838 sshd[5756]: Accepted publickey for core from 20.161.92.111 port 45400 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:24.891639 kernel: audit: type=1101 audit(1768434684.883:840): pid=5756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.890000 audit[5756]: CRED_ACQ pid=5756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.898419 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:24.901388 kernel: audit: type=1103 audit(1768434684.890:841): pid=5756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.901567 kernel: audit: type=1006 audit(1768434684.890:842): pid=5756 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 23:51:24.890000 audit[5756]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeca1eea0 a2=3 a3=0 items=0 ppid=1 pid=5756 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:51:24.908617 kernel: audit: type=1300 audit(1768434684.890:842): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeca1eea0 a2=3 a3=0 items=0 ppid=1 pid=5756 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:24.911927 kernel: audit: type=1327 audit(1768434684.890:842): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:24.890000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:24.917697 systemd-logind[1960]: New session 16 of user core. Jan 14 23:51:24.930925 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 23:51:24.936000 audit[5756]: USER_START pid=5756 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.946657 kernel: audit: type=1105 audit(1768434684.936:843): pid=5756 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.947002 kernel: audit: type=1103 audit(1768434684.945:844): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:24.945000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.266310 sshd[5759]: Connection closed by 20.161.92.111 port 45400 Jan 14 23:51:25.267815 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:25.269000 audit[5756]: USER_END pid=5756 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.279310 systemd[1]: sshd@15-172.31.18.197:22-20.161.92.111:45400.service: Deactivated successfully. 
Jan 14 23:51:25.285085 kernel: audit: type=1106 audit(1768434685.269:845): pid=5756 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.285223 kernel: audit: type=1104 audit(1768434685.273:846): pid=5756 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.273000 audit[5756]: CRED_DISP pid=5756 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.18.197:22-20.161.92.111:45400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:25.287282 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 23:51:25.294713 systemd-logind[1960]: Session 16 logged out. Waiting for processes to exit. Jan 14 23:51:25.296577 systemd-logind[1960]: Removed session 16. Jan 14 23:51:25.374562 systemd[1]: Started sshd@16-172.31.18.197:22-20.161.92.111:45404.service - OpenSSH per-connection server daemon (20.161.92.111:45404). Jan 14 23:51:25.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.197:22-20.161.92.111:45404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:25.855000 audit[5771]: USER_ACCT pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.857008 sshd[5771]: Accepted publickey for core from 20.161.92.111 port 45404 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:25.856000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.857000 audit[5771]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5f830e0 a2=3 a3=0 items=0 ppid=1 pid=5771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:25.857000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:25.860048 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:25.870325 systemd-logind[1960]: New session 17 of user core. Jan 14 23:51:25.875916 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 23:51:25.881000 audit[5771]: USER_START pid=5771 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:25.885000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:27.306319 sshd[5774]: Connection closed by 20.161.92.111 port 45404 Jan 14 23:51:27.307301 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:27.310000 audit[5771]: USER_END pid=5771 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:27.310000 audit[5771]: CRED_DISP pid=5771 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:27.317814 systemd[1]: sshd@16-172.31.18.197:22-20.161.92.111:45404.service: Deactivated successfully. Jan 14 23:51:27.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.18.197:22-20.161.92.111:45404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:27.325030 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 23:51:27.327355 systemd-logind[1960]: Session 17 logged out. Waiting for processes to exit. Jan 14 23:51:27.331817 systemd-logind[1960]: Removed session 17. Jan 14 23:51:27.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.197:22-20.161.92.111:45406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:27.396491 systemd[1]: Started sshd@17-172.31.18.197:22-20.161.92.111:45406.service - OpenSSH per-connection server daemon (20.161.92.111:45406). 
Jan 14 23:51:27.872000 audit[5784]: USER_ACCT pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:27.875718 sshd[5784]: Accepted publickey for core from 20.161.92.111 port 45406 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:27.875000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:27.875000 audit[5784]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2b6bff0 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:27.875000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:27.878069 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:27.891727 systemd-logind[1960]: New session 18 of user core. Jan 14 23:51:27.898886 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 23:51:27.903000 audit[5784]: USER_START pid=5784 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:27.906000 audit[5787]: CRED_ACQ pid=5787 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:28.467882 kubelet[3513]: E0114 23:51:28.467799 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:51:29.080000 audit[5803]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:29.080000 audit[5803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff44211d0 a2=0 a3=1 items=0 ppid=3619 pid=5803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:29.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:29.091000 audit[5803]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:29.091000 audit[5803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff44211d0 a2=0 a3=1 items=0 ppid=3619 pid=5803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:29.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:29.145000 audit[5805]: NETFILTER_CFG table=filter:151 family=2 entries=38 op=nft_register_rule pid=5805 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:29.145000 audit[5805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffca26e360 a2=0 a3=1 items=0 ppid=3619 pid=5805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:29.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:29.155000 audit[5805]: NETFILTER_CFG table=nat:152 family=2 entries=20 op=nft_register_rule pid=5805 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:29.155000 audit[5805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffca26e360 a2=0 a3=1 items=0 ppid=3619 pid=5805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:29.155000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:29.179335 sshd[5787]: Connection closed by 20.161.92.111 port 45406 Jan 14 23:51:29.179786 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:29.182000 audit[5784]: USER_END pid=5784 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.182000 audit[5784]: CRED_DISP pid=5784 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.189041 systemd[1]: sshd@17-172.31.18.197:22-20.161.92.111:45406.service: Deactivated successfully. Jan 14 23:51:29.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.18.197:22-20.161.92.111:45406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:29.196525 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 14 23:51:29.198736 systemd-logind[1960]: Session 18 logged out. Waiting for processes to exit. Jan 14 23:51:29.202771 systemd-logind[1960]: Removed session 18. Jan 14 23:51:29.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.197:22-20.161.92.111:45412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:29.284860 systemd[1]: Started sshd@18-172.31.18.197:22-20.161.92.111:45412.service - OpenSSH per-connection server daemon (20.161.92.111:45412). Jan 14 23:51:29.763000 audit[5810]: USER_ACCT pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.765166 sshd[5810]: Accepted publickey for core from 20.161.92.111 port 45412 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:29.766042 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 14 23:51:29.766133 kernel: audit: type=1101 audit(1768434689.763:871): pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.770000 audit[5810]: CRED_ACQ pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.773412 sshd-session[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:29.778057 kernel: audit: type=1103 audit(1768434689.770:872): pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.781866 kernel: audit: type=1006 audit(1768434689.770:873): pid=5810 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 23:51:29.770000 audit[5810]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff35dd680 a2=3 a3=0 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:29.789086 kernel: audit: type=1300 audit(1768434689.770:873): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff35dd680 a2=3 a3=0 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:29.770000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:29.791888 kernel: audit: type=1327 audit(1768434689.770:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:29.796655 systemd-logind[1960]: New session 19 of user core. Jan 14 23:51:29.804925 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 23:51:29.809000 audit[5810]: USER_START pid=5810 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.816000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.823698 kernel: audit: type=1105 audit(1768434689.809:874): pid=5810 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:29.823841 kernel: audit: type=1103 audit(1768434689.816:875): pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.409889 sshd[5813]: Connection closed by 20.161.92.111 port 45412 Jan 14 23:51:30.410763 sshd-session[5810]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:30.415000 audit[5810]: USER_END pid=5810 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.415000 audit[5810]: CRED_DISP pid=5810 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.424994 systemd[1]: sshd@18-172.31.18.197:22-20.161.92.111:45412.service: Deactivated successfully. Jan 14 23:51:30.432568 kernel: audit: type=1106 audit(1768434690.415:876): pid=5810 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.433353 kernel: audit: type=1104 audit(1768434690.415:877): pid=5810 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.433941 kernel: audit: type=1131 audit(1768434690.423:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.197:22-20.161.92.111:45412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:30.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.18.197:22-20.161.92.111:45412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:51:30.430054 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 23:51:30.440854 systemd-logind[1960]: Session 19 logged out. Waiting for processes to exit. Jan 14 23:51:30.443322 systemd-logind[1960]: Removed session 19. Jan 14 23:51:30.466721 kubelet[3513]: E0114 23:51:30.465975 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:51:30.466721 kubelet[3513]: E0114 23:51:30.466382 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:51:30.504125 systemd[1]: Started sshd@19-172.31.18.197:22-20.161.92.111:45420.service - OpenSSH per-connection server daemon (20.161.92.111:45420). Jan 14 23:51:30.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.197:22-20.161.92.111:45420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:30.979000 audit[5823]: USER_ACCT pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.982559 sshd[5823]: Accepted publickey for core from 20.161.92.111 port 45420 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:30.982000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:30.982000 audit[5823]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd21fd190 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:30.982000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:30.986320 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:31.007012 systemd-logind[1960]: New session 20 of user core. Jan 14 23:51:31.019349 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 23:51:31.031000 audit[5823]: USER_START pid=5823 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:31.035000 audit[5851]: CRED_ACQ pid=5851 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:31.364672 sshd[5851]: Connection closed by 20.161.92.111 port 45420 Jan 14 23:51:31.364944 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:31.366000 audit[5823]: USER_END pid=5823 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:31.366000 audit[5823]: CRED_DISP pid=5823 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:31.372950 systemd[1]: sshd@19-172.31.18.197:22-20.161.92.111:45420.service: Deactivated successfully. Jan 14 23:51:31.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.18.197:22-20.161.92.111:45420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:31.379129 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 23:51:31.383342 systemd-logind[1960]: Session 20 logged out. Waiting for processes to exit. Jan 14 23:51:31.386975 systemd-logind[1960]: Removed session 20. 
Jan 14 23:51:31.465788 kubelet[3513]: E0114 23:51:31.465414 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:51:31.469451 kubelet[3513]: E0114 23:51:31.469232 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:51:32.469118 kubelet[3513]: E0114 23:51:32.469025 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:51:34.463113 kubelet[3513]: E0114 23:51:34.462995 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:51:36.235000 audit[5867]: NETFILTER_CFG table=filter:153 family=2 entries=26 op=nft_register_rule pid=5867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:36.242819 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 23:51:36.242945 kernel: audit: type=1325 audit(1768434696.235:888): table=filter:153 family=2 entries=26 op=nft_register_rule pid=5867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:36.235000 audit[5867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe9c947f0 a2=0 a3=1 items=0 ppid=3619 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:51:36.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:36.254491 kernel: audit: type=1300 audit(1768434696.235:888): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe9c947f0 a2=0 a3=1 items=0 ppid=3619 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:36.254628 kernel: audit: type=1327 audit(1768434696.235:888): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:36.253000 audit[5867]: NETFILTER_CFG table=nat:154 family=2 entries=104 op=nft_register_chain pid=5867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:36.253000 audit[5867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe9c947f0 a2=0 a3=1 items=0 ppid=3619 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:36.267099 kernel: audit: type=1325 audit(1768434696.253:889): table=nat:154 family=2 entries=104 op=nft_register_chain pid=5867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:51:36.267186 kernel: audit: type=1300 audit(1768434696.253:889): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe9c947f0 a2=0 a3=1 items=0 ppid=3619 pid=5867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:36.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:36.270786 kernel: audit: type=1327 audit(1768434696.253:889): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:51:36.463855 systemd[1]: Started sshd@20-172.31.18.197:22-20.161.92.111:48902.service - OpenSSH per-connection server daemon (20.161.92.111:48902). Jan 14 23:51:36.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.197:22-20.161.92.111:48902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:36.470628 kernel: audit: type=1130 audit(1768434696.462:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.197:22-20.161.92.111:48902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:51:36.924000 audit[5869]: USER_ACCT pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:36.932907 sshd[5869]: Accepted publickey for core from 20.161.92.111 port 48902 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:36.932000 audit[5869]: CRED_ACQ pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:36.934771 sshd-session[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:36.940306 kernel: audit: type=1101 audit(1768434696.924:891): pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:36.940410 kernel: audit: type=1103 audit(1768434696.932:892): pid=5869 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:36.944659 kernel: audit: type=1006 audit(1768434696.932:893): pid=5869 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 23:51:36.932000 audit[5869]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3ede1d0 a2=3 a3=0 items=0 ppid=1 pid=5869 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:36.932000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:36.953681 systemd-logind[1960]: New session 21 of user core. Jan 14 23:51:36.960977 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 23:51:36.973000 audit[5869]: USER_START pid=5869 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:36.977000 audit[5872]: CRED_ACQ pid=5872 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:37.293739 sshd[5872]: Connection closed by 20.161.92.111 port 48902 Jan 14 23:51:37.293461 sshd-session[5869]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:37.296000 audit[5869]: USER_END pid=5869 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:37.296000 audit[5869]: CRED_DISP pid=5869 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:37.303661 systemd[1]: sshd@20-172.31.18.197:22-20.161.92.111:48902.service: Deactivated successfully. Jan 14 23:51:37.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.18.197:22-20.161.92.111:48902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:37.310049 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 23:51:37.312390 systemd-logind[1960]: Session 21 logged out. Waiting for processes to exit. Jan 14 23:51:37.316521 systemd-logind[1960]: Removed session 21. Jan 14 23:51:41.466791 kubelet[3513]: E0114 23:51:41.464190 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:51:42.388952 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 23:51:42.389065 kernel: audit: type=1130 audit(1768434702.385:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.197:22-20.161.92.111:48910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:42.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.197:22-20.161.92.111:48910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:42.387077 systemd[1]: Started sshd@21-172.31.18.197:22-20.161.92.111:48910.service - OpenSSH per-connection server daemon (20.161.92.111:48910). 
Jan 14 23:51:42.851000 audit[5887]: USER_ACCT pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.854918 sshd[5887]: Accepted publickey for core from 20.161.92.111 port 48910 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:42.860700 kernel: audit: type=1101 audit(1768434702.851:900): pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.862714 kernel: audit: type=1103 audit(1768434702.859:901): pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.859000 audit[5887]: CRED_ACQ pid=5887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.861902 sshd-session[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:42.870111 kernel: audit: type=1006 audit(1768434702.859:902): pid=5887 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 23:51:42.859000 audit[5887]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4d54e90 a2=3 a3=0 items=0 ppid=1 pid=5887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:42.877667 kernel: audit: type=1300 audit(1768434702.859:902): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4d54e90 a2=3 a3=0 items=0 ppid=1 pid=5887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:42.877791 kernel: audit: type=1327 audit(1768434702.859:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:42.859000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:42.885790 systemd-logind[1960]: New session 22 of user core. Jan 14 23:51:42.890913 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 23:51:42.897000 audit[5887]: USER_START pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.905933 kernel: audit: type=1105 audit(1768434702.897:903): pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.904000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:42.913644 kernel: audit: type=1103 audit(1768434702.904:904): pid=5890 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:43.310727 sshd[5890]: Connection closed by 20.161.92.111 port 48910 Jan 14 23:51:43.313834 sshd-session[5887]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:43.316000 audit[5887]: USER_END pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:43.328004 systemd[1]: sshd@21-172.31.18.197:22-20.161.92.111:48910.service: Deactivated successfully. Jan 14 23:51:43.316000 audit[5887]: CRED_DISP pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:43.335149 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 23:51:43.341282 kernel: audit: type=1106 audit(1768434703.316:905): pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:43.341449 kernel: audit: type=1104 audit(1768434703.316:906): pid=5887 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:43.343660 systemd-logind[1960]: Session 22 logged out. Waiting for processes to exit. Jan 14 23:51:43.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.18.197:22-20.161.92.111:48910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:43.350607 systemd-logind[1960]: Removed session 22. 
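Each SSH connection in this capture follows the same audit lifecycle: USER_ACCT and CRED_ACQ on accept, USER_START when the PAM session opens, then USER_END, CRED_DISP and SERVICE_STOP when it closes, all within roughly half a second (session 22 above runs from USER_START at 23:51:42.897 to USER_END at 23:51:43.316). A rough Python sketch, assuming the timestamp and record layout seen in these excerpts and a hypothetical saved log file, that pairs the records by pid to report session durations:

    import re
    from datetime import datetime

    EVENT_RE = re.compile(
        r'(?P<ts>\w{3} +\d+ \d\d:\d\d:\d\d\.\d+) audit\[\d+\]: '
        r'(?P<type>USER_START|USER_END) pid=(?P<pid>\d+)'
    )

    def session_durations(log_path):
        """Pair audit USER_START/USER_END records by pid; yield durations in seconds."""
        starts = {}
        with open(log_path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for m in EVENT_RE.finditer(line):
                    ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
                    if m["type"] == "USER_START":
                        starts[m["pid"]] = ts
                    elif m["pid"] in starts:
                        yield m["pid"], (ts - starts.pop(m["pid"])).total_seconds()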
Jan 14 23:51:43.469265 kubelet[3513]: E0114 23:51:43.469175 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:51:43.471718 kubelet[3513]: E0114 23:51:43.471217 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:51:43.473433 kubelet[3513]: E0114 23:51:43.472916 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:51:44.464265 kubelet[3513]: E0114 23:51:44.464164 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:51:44.465532 kubelet[3513]: E0114 23:51:44.465470 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:51:48.404223 systemd[1]: Started sshd@22-172.31.18.197:22-20.161.92.111:57702.service - OpenSSH per-connection server daemon (20.161.92.111:57702). Jan 14 23:51:48.410768 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:51:48.410894 kernel: audit: type=1130 audit(1768434708.402:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.197:22-20.161.92.111:57702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:48.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.197:22-20.161.92.111:57702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:48.885000 audit[5903]: USER_ACCT pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.894627 sshd[5903]: Accepted publickey for core from 20.161.92.111 port 57702 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:48.894000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.902719 kernel: audit: type=1101 audit(1768434708.885:909): pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.902870 kernel: audit: type=1103 audit(1768434708.894:910): pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.903474 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:48.911671 kernel: audit: type=1006 audit(1768434708.894:911): pid=5903 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 23:51:48.894000 audit[5903]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6c3b7d0 a2=3 a3=0 items=0 ppid=1 pid=5903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:48.922698 kernel: audit: type=1300 audit(1768434708.894:911): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6c3b7d0 a2=3 a3=0 items=0 ppid=1 pid=5903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:48.922827 kernel: audit: type=1327 audit(1768434708.894:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:48.894000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:48.925209 systemd-logind[1960]: New session 23 of user core. Jan 14 23:51:48.930916 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 23:51:48.936000 audit[5903]: USER_START pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.946767 kernel: audit: type=1105 audit(1768434708.936:912): pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.948000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:48.956650 kernel: audit: type=1103 audit(1768434708.948:913): pid=5906 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:49.315985 sshd[5906]: Connection closed by 20.161.92.111 port 57702 Jan 14 23:51:49.317307 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:49.319000 audit[5903]: USER_END pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:49.329644 kernel: audit: type=1106 audit(1768434709.319:914): pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:49.329780 kernel: audit: type=1104 audit(1768434709.319:915): pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:49.319000 audit[5903]: CRED_DISP pid=5903 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:49.340310 systemd[1]: sshd@22-172.31.18.197:22-20.161.92.111:57702.service: Deactivated successfully. Jan 14 23:51:49.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.18.197:22-20.161.92.111:57702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:49.347260 systemd[1]: session-23.scope: Deactivated successfully. 
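The kubelet "Error syncing pod" records that recur throughout this capture share one root cause: every ghcr.io/flatcar/calico/*:v3.30.4 image referenced here is reported as not found by the registry, so the affected pods cycle between ErrImagePull and ImagePullBackOff. A small Python sketch, assuming a hypothetical saved copy of this log and the quoting style shown in these entries, to tally which images are failing and for which pods:

    import re
    from collections import defaultdict

    IMAGE_RE = re.compile(r'ghcr\.io/[\w./-]+:[\w.-]+')
    POD_RE = re.compile(r'pod="([^"]+)"')

    def failing_images(log_path):
        """Map each image reference to the set of pods reporting pull failures for it."""
        failures = defaultdict(set)
        with open(log_path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if "Error syncing pod" not in line and "PullImage" not in line:
                    continue
                pods = POD_RE.findall(line) or ["<unknown>"]
                for image in set(IMAGE_RE.findall(line)):
                    failures[image].update(pods)
        return failures

    # for image, pods in sorted(failing_images("node.log").items()): print(image, sorted(pods))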
Jan 14 23:51:49.351471 systemd-logind[1960]: Session 23 logged out. Waiting for processes to exit. Jan 14 23:51:49.354676 systemd-logind[1960]: Removed session 23. Jan 14 23:51:49.467414 kubelet[3513]: E0114 23:51:49.467272 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:51:54.415950 systemd[1]: Started sshd@23-172.31.18.197:22-20.161.92.111:58228.service - OpenSSH per-connection server daemon (20.161.92.111:58228). Jan 14 23:51:54.418785 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:51:54.418898 kernel: audit: type=1130 audit(1768434714.414:917): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.197:22-20.161.92.111:58228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:54.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.197:22-20.161.92.111:58228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:54.467494 kubelet[3513]: E0114 23:51:54.466890 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:51:54.921000 audit[5924]: USER_ACCT pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.925124 sshd[5924]: Accepted publickey for core from 20.161.92.111 port 58228 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:51:54.929000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.933433 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:51:54.937193 kernel: audit: type=1101 audit(1768434714.921:918): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.937328 kernel: audit: type=1103 audit(1768434714.929:919): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.942457 kernel: audit: type=1006 audit(1768434714.929:920): pid=5924 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 23:51:54.929000 audit[5924]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5731430 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:54.950506 kernel: audit: type=1300 audit(1768434714.929:920): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5731430 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:51:54.929000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:54.954521 kernel: audit: type=1327 audit(1768434714.929:920): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:51:54.958181 systemd-logind[1960]: New session 24 of user core. Jan 14 23:51:54.965183 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 23:51:54.973000 audit[5924]: USER_START pid=5924 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.973000 audit[5927]: CRED_ACQ pid=5927 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.987790 kernel: audit: type=1105 audit(1768434714.973:921): pid=5924 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:54.987875 kernel: audit: type=1103 audit(1768434714.973:922): pid=5927 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:55.329724 sshd[5927]: Connection closed by 20.161.92.111 port 58228 Jan 14 23:51:55.330548 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Jan 14 23:51:55.334000 audit[5924]: USER_END pid=5924 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:55.347624 kernel: audit: type=1106 audit(1768434715.334:923): pid=5924 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:55.347745 kernel: audit: type=1104 audit(1768434715.335:924): pid=5924 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:55.335000 audit[5924]: CRED_DISP pid=5924 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:51:55.345147 systemd[1]: sshd@23-172.31.18.197:22-20.161.92.111:58228.service: Deactivated successfully. Jan 14 23:51:55.352664 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 23:51:55.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.18.197:22-20.161.92.111:58228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:51:55.360216 systemd-logind[1960]: Session 24 logged out. Waiting for processes to exit. Jan 14 23:51:55.365673 systemd-logind[1960]: Removed session 24. Jan 14 23:51:55.472251 containerd[1996]: time="2026-01-14T23:51:55.472028446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:51:55.728822 containerd[1996]: time="2026-01-14T23:51:55.728747556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:55.731118 containerd[1996]: time="2026-01-14T23:51:55.731009760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:51:55.731298 containerd[1996]: time="2026-01-14T23:51:55.731167788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:55.731866 kubelet[3513]: E0114 23:51:55.731801 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:51:55.732376 kubelet[3513]: E0114 23:51:55.731872 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:51:55.732376 kubelet[3513]: E0114 23:51:55.732033 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c071b4136e94af696b69447927c640d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:55.737204 containerd[1996]: time="2026-01-14T23:51:55.737146188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:51:56.152433 containerd[1996]: time="2026-01-14T23:51:56.152358478Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:56.155953 containerd[1996]: time="2026-01-14T23:51:56.155787934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:51:56.155953 containerd[1996]: time="2026-01-14T23:51:56.155871238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:56.156411 kubelet[3513]: E0114 23:51:56.156146 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:51:56.156411 kubelet[3513]: E0114 23:51:56.156237 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:51:56.157138 kubelet[3513]: E0114 23:51:56.157033 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-789ccfdc9b-bggd8_calico-system(54b5943d-5205-46b5-af4e-d4680f06e390): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:56.158810 kubelet[3513]: E0114 23:51:56.158716 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:51:56.466188 kubelet[3513]: E0114 23:51:56.466020 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:51:57.464385 kubelet[3513]: E0114 23:51:57.463738 3513 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:51:58.468287 kubelet[3513]: E0114 23:51:58.466468 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:51:59.464954 containerd[1996]: time="2026-01-14T23:51:59.464836862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:51:59.742112 containerd[1996]: time="2026-01-14T23:51:59.741751900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:59.744623 containerd[1996]: time="2026-01-14T23:51:59.744405568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:51:59.744974 containerd[1996]: time="2026-01-14T23:51:59.744538372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:59.745195 kubelet[3513]: E0114 23:51:59.745019 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:59.745195 kubelet[3513]: E0114 23:51:59.745083 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:59.746222 kubelet[3513]: E0114 23:51:59.745281 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5xqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-l8j4w_calico-apiserver(255a4468-7378-413d-92bd-8056478658d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:59.746908 kubelet[3513]: E0114 23:51:59.746827 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:52:00.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.197:22-20.161.92.111:58238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:52:00.424417 systemd[1]: Started sshd@24-172.31.18.197:22-20.161.92.111:58238.service - OpenSSH per-connection server daemon (20.161.92.111:58238). Jan 14 23:52:00.427807 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:52:00.427935 kernel: audit: type=1130 audit(1768434720.423:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.197:22-20.161.92.111:58238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:52:00.927000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:00.934851 sshd[5940]: Accepted publickey for core from 20.161.92.111 port 58238 ssh2: RSA SHA256:tQfYP2ATQd/HQz4yrh8s4gHDWQ0sgDwafourhFj+esE Jan 14 23:52:00.939407 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:52:00.936000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:00.946768 kernel: audit: type=1101 audit(1768434720.927:927): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:00.946923 kernel: audit: type=1103 audit(1768434720.936:928): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:00.954167 kernel: audit: type=1006 audit(1768434720.936:929): pid=5940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 23:52:00.936000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd36e000 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:00.970173 systemd-logind[1960]: New session 25 of user core. Jan 14 23:52:00.976891 kernel: audit: type=1300 audit(1768434720.936:929): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd36e000 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:00.976975 kernel: audit: type=1327 audit(1768434720.936:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:52:00.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:52:00.979113 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 23:52:00.987000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:00.996000 audit[5967]: CRED_ACQ pid=5967 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.004822 kernel: audit: type=1105 audit(1768434720.987:930): pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.004939 kernel: audit: type=1103 audit(1768434720.996:931): pid=5967 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.380892 sshd[5967]: Connection closed by 20.161.92.111 port 58238 Jan 14 23:52:01.384942 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Jan 14 23:52:01.386000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.396679 systemd[1]: sshd@24-172.31.18.197:22-20.161.92.111:58238.service: Deactivated successfully. Jan 14 23:52:01.400133 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 23:52:01.387000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.412138 kernel: audit: type=1106 audit(1768434721.386:932): pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.412294 kernel: audit: type=1104 audit(1768434721.387:933): pid=5940 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:52:01.413964 systemd-logind[1960]: Session 25 logged out. Waiting for processes to exit. Jan 14 23:52:01.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.18.197:22-20.161.92.111:58238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:52:01.419147 systemd-logind[1960]: Removed session 25. 
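The containerd records above show the underlying failure: each PullImage attempt ends with "fetch failed after status: 404 Not Found" from ghcr.io before kubelet turns it into the ErrImagePull/ImagePullBackOff messages. A hedged Python sketch to reproduce that check from another machine, assuming ghcr.io follows the standard OCI distribution flow (anonymous pull token plus a manifest HEAD); the endpoint layout is an assumption, not taken from this log:

    import json
    import urllib.error
    import urllib.request

    def manifest_status(repo, tag):
        """Return the HTTP status of a manifest HEAD against ghcr.io (assumed OCI layout)."""
        token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/vnd.oci.image.index.v1+json"},
            method="HEAD",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status
        except urllib.error.HTTPError as err:
            return err.code  # a 404 here matches the pull failures recorded above

    # print(manifest_status("flatcar/calico/whisker", "v3.30.4"))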
Jan 14 23:52:01.468040 containerd[1996]: time="2026-01-14T23:52:01.467870536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:52:01.741723 containerd[1996]: time="2026-01-14T23:52:01.740758926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:52:01.744209 containerd[1996]: time="2026-01-14T23:52:01.744090198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:52:01.744533 containerd[1996]: time="2026-01-14T23:52:01.744145146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:52:01.744922 kubelet[3513]: E0114 23:52:01.744873 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:52:01.745789 kubelet[3513]: E0114 23:52:01.745476 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:52:01.746067 kubelet[3513]: E0114 23:52:01.745958 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fmf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fdc6fb9d4-p5lnk_calico-system(21b369b7-d986-41d1-8a2e-a01d832685f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:52:01.748125 kubelet[3513]: E0114 23:52:01.748054 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:52:06.464216 containerd[1996]: time="2026-01-14T23:52:06.463991541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:52:06.735829 containerd[1996]: time="2026-01-14T23:52:06.735648886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:52:06.737974 containerd[1996]: time="2026-01-14T23:52:06.737899042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:52:06.737974 containerd[1996]: time="2026-01-14T23:52:06.737921062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:52:06.738271 kubelet[3513]: E0114 23:52:06.738224 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:52:06.738781 kubelet[3513]: E0114 23:52:06.738288 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:52:06.738781 kubelet[3513]: E0114 23:52:06.738505 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr2sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fd2bc_calico-system(2054a7f1-33a6-4a0b-8079-7e8881899bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:52:06.739864 kubelet[3513]: E0114 23:52:06.739783 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:52:07.467281 kubelet[3513]: E0114 23:52:07.467062 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:52:11.465610 kubelet[3513]: E0114 23:52:11.464919 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:52:11.466206 containerd[1996]: time="2026-01-14T23:52:11.465421058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:52:11.744332 containerd[1996]: time="2026-01-14T23:52:11.744151587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:52:11.746419 containerd[1996]: time="2026-01-14T23:52:11.746346423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:52:11.746714 containerd[1996]: time="2026-01-14T23:52:11.746382759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:52:11.746782 kubelet[3513]: E0114 23:52:11.746672 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:52:11.746782 kubelet[3513]: E0114 23:52:11.746728 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:52:11.747398 kubelet[3513]: E0114 23:52:11.747028 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcwsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc6d978d-lsv59_calico-apiserver(49be0d3d-84d7-4994-b2e7-37cea1fa9624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:52:11.747706 containerd[1996]: time="2026-01-14T23:52:11.747316659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:52:11.748671 kubelet[3513]: E0114 23:52:11.748570 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:52:12.022889 containerd[1996]: time="2026-01-14T23:52:12.022570261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:52:12.024867 containerd[1996]: time="2026-01-14T23:52:12.024725821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:52:12.024867 containerd[1996]: time="2026-01-14T23:52:12.024796489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:52:12.025343 kubelet[3513]: 
E0114 23:52:12.025264 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:52:12.025549 kubelet[3513]: E0114 23:52:12.025490 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:52:12.025877 kubelet[3513]: E0114 23:52:12.025786 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdgll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d86655bcb-8jvgv_calico-apiserver(296d9b29-20aa-492b-aa70-e26652feb8da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:52:12.027067 kubelet[3513]: E0114 23:52:12.027015 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:52:13.466087 containerd[1996]: time="2026-01-14T23:52:13.465854332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:52:13.719121 containerd[1996]: time="2026-01-14T23:52:13.718962689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:52:13.721233 containerd[1996]: time="2026-01-14T23:52:13.721155041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:52:13.721364 containerd[1996]: time="2026-01-14T23:52:13.721300529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:52:13.721737 kubelet[3513]: E0114 23:52:13.721573 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:52:13.721737 kubelet[3513]: E0114 23:52:13.721703 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:52:13.722652 kubelet[3513]: E0114 23:52:13.722533 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:52:13.725470 containerd[1996]: time="2026-01-14T23:52:13.725378465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:52:13.983048 containerd[1996]: time="2026-01-14T23:52:13.982885950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:52:13.985120 containerd[1996]: time="2026-01-14T23:52:13.985029318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:52:13.985250 containerd[1996]: time="2026-01-14T23:52:13.985164066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:52:13.985471 kubelet[3513]: E0114 23:52:13.985376 3513 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:52:13.985471 kubelet[3513]: E0114 23:52:13.985447 3513 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:52:13.985904 kubelet[3513]: E0114 23:52:13.985819 3513 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdx84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dsvj_calico-system(23b28fd4-dc96-481b-a69a-1d96358778f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:52:13.987106 kubelet[3513]: E0114 23:52:13.987052 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5" Jan 14 23:52:14.462491 kubelet[3513]: E0114 23:52:14.462428 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:52:15.416314 systemd[1]: 
cri-containerd-dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba.scope: Deactivated successfully. Jan 14 23:52:15.418058 systemd[1]: cri-containerd-dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba.scope: Consumed 30.064s CPU time, 108M memory peak. Jan 14 23:52:15.420000 audit: BPF prog-id=152 op=UNLOAD Jan 14 23:52:15.422214 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:52:15.422287 kernel: audit: type=1334 audit(1768434735.420:935): prog-id=152 op=UNLOAD Jan 14 23:52:15.420000 audit: BPF prog-id=156 op=UNLOAD Jan 14 23:52:15.427523 kernel: audit: type=1334 audit(1768434735.420:936): prog-id=156 op=UNLOAD Jan 14 23:52:15.427663 containerd[1996]: time="2026-01-14T23:52:15.426061494Z" level=info msg="received container exit event container_id:\"dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba\" id:\"dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba\" pid:3836 exit_status:1 exited_at:{seconds:1768434735 nanos:424852506}" Jan 14 23:52:15.475956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba-rootfs.mount: Deactivated successfully. Jan 14 23:52:16.295999 kubelet[3513]: I0114 23:52:16.295881 3513 scope.go:117] "RemoveContainer" containerID="dac1888a9150e46d946f273d249809444e739843c6a3221125849236c62192ba" Jan 14 23:52:16.300547 containerd[1996]: time="2026-01-14T23:52:16.300460182Z" level=info msg="CreateContainer within sandbox \"7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 23:52:16.316184 systemd[1]: cri-containerd-f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56.scope: Deactivated successfully. Jan 14 23:52:16.316808 systemd[1]: cri-containerd-f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56.scope: Consumed 9.721s CPU time, 61.5M memory peak. Jan 14 23:52:16.318000 audit: BPF prog-id=267 op=LOAD Jan 14 23:52:16.326210 containerd[1996]: time="2026-01-14T23:52:16.325872102Z" level=info msg="received container exit event container_id:\"f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56\" id:\"f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56\" pid:3137 exit_status:1 exited_at:{seconds:1768434736 nanos:324379110}" Jan 14 23:52:16.320000 audit: BPF prog-id=89 op=UNLOAD Jan 14 23:52:16.328892 kernel: audit: type=1334 audit(1768434736.318:937): prog-id=267 op=LOAD Jan 14 23:52:16.328995 kernel: audit: type=1334 audit(1768434736.320:938): prog-id=89 op=UNLOAD Jan 14 23:52:16.329061 kernel: audit: type=1334 audit(1768434736.322:939): prog-id=104 op=UNLOAD Jan 14 23:52:16.322000 audit: BPF prog-id=104 op=UNLOAD Jan 14 23:52:16.332506 kernel: audit: type=1334 audit(1768434736.322:940): prog-id=108 op=UNLOAD Jan 14 23:52:16.322000 audit: BPF prog-id=108 op=UNLOAD Jan 14 23:52:16.346642 containerd[1996]: time="2026-01-14T23:52:16.344337342Z" level=info msg="Container 09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:16.362732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904354465.mount: Deactivated successfully. 
Jan 14 23:52:16.370758 containerd[1996]: time="2026-01-14T23:52:16.370648542Z" level=info msg="CreateContainer within sandbox \"7bff15e8298eb1901ea375f1bf95a3c279b3ae3359852b2d42832db6ee7fda9a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db\"" Jan 14 23:52:16.372333 containerd[1996]: time="2026-01-14T23:52:16.372287406Z" level=info msg="StartContainer for \"09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db\"" Jan 14 23:52:16.374771 containerd[1996]: time="2026-01-14T23:52:16.374575494Z" level=info msg="connecting to shim 09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db" address="unix:///run/containerd/s/112527f83027f3a4ecb09e8766a78af86446564ea97ee63dbd2c288c1a82f777" protocol=ttrpc version=3 Jan 14 23:52:16.423294 systemd[1]: Started cri-containerd-09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db.scope - libcontainer container 09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db. Jan 14 23:52:16.451000 audit: BPF prog-id=268 op=LOAD Jan 14 23:52:16.453636 kernel: audit: type=1334 audit(1768434736.451:941): prog-id=268 op=LOAD Jan 14 23:52:16.453000 audit: BPF prog-id=269 op=LOAD Jan 14 23:52:16.461802 kernel: audit: type=1334 audit(1768434736.453:942): prog-id=269 op=LOAD Jan 14 23:52:16.461890 kernel: audit: type=1300 audit(1768434736.453:942): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.453000 audit[6032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.470021 kernel: audit: type=1327 audit(1768434736.453:942): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.453000 audit: BPF prog-id=269 op=UNLOAD Jan 14 23:52:16.453000 audit[6032]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.453000 audit: BPF prog-id=270 op=LOAD Jan 14 23:52:16.453000 audit[6032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3646 pid=6032 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.461000 audit: BPF prog-id=271 op=LOAD Jan 14 23:52:16.461000 audit[6032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.461000 audit: BPF prog-id=271 op=UNLOAD Jan 14 23:52:16.461000 audit[6032]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.461000 audit: BPF prog-id=270 op=UNLOAD Jan 14 23:52:16.461000 audit[6032]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.462000 audit: BPF prog-id=272 op=LOAD Jan 14 23:52:16.462000 audit[6032]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3646 pid=6032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:16.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626365323538653963613938653161643034383265326339353638 Jan 14 23:52:16.477498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56-rootfs.mount: Deactivated successfully. 
Jan 14 23:52:16.508300 containerd[1996]: time="2026-01-14T23:52:16.508238647Z" level=info msg="StartContainer for \"09bce258e9ca98e1ad0482e2c95689f7d3bf4298c5b26213d1210f5e8b2467db\" returns successfully" Jan 14 23:52:17.302933 kubelet[3513]: I0114 23:52:17.302548 3513 scope.go:117] "RemoveContainer" containerID="f7a36272e2f2686043f56697d76899ca8fb8bbc67662af29f9291149d37b5e56" Jan 14 23:52:17.309646 containerd[1996]: time="2026-01-14T23:52:17.308407735Z" level=info msg="CreateContainer within sandbox \"7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 23:52:17.328141 containerd[1996]: time="2026-01-14T23:52:17.328078615Z" level=info msg="Container 42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:17.349567 containerd[1996]: time="2026-01-14T23:52:17.349514251Z" level=info msg="CreateContainer within sandbox \"7adca946cbee63a85f5c4236daadf37caed657cbc39392be919b48941888c6d6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a\"" Jan 14 23:52:17.350849 containerd[1996]: time="2026-01-14T23:52:17.350784487Z" level=info msg="StartContainer for \"42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a\"" Jan 14 23:52:17.354646 containerd[1996]: time="2026-01-14T23:52:17.354554251Z" level=info msg="connecting to shim 42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a" address="unix:///run/containerd/s/14087e40d13dc3990ad0ca37e0f2ba2d9908d8d17b7305f0f3b7461981f6193a" protocol=ttrpc version=3 Jan 14 23:52:17.397952 systemd[1]: Started cri-containerd-42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a.scope - libcontainer container 42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a. 
Jan 14 23:52:17.426000 audit: BPF prog-id=273 op=LOAD Jan 14 23:52:17.426000 audit: BPF prog-id=274 op=LOAD Jan 14 23:52:17.426000 audit[6063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.427000 audit: BPF prog-id=274 op=UNLOAD Jan 14 23:52:17.427000 audit[6063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.427000 audit: BPF prog-id=275 op=LOAD Jan 14 23:52:17.427000 audit[6063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.427000 audit: BPF prog-id=276 op=LOAD Jan 14 23:52:17.427000 audit[6063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.427000 audit: BPF prog-id=276 op=UNLOAD Jan 14 23:52:17.427000 audit[6063]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.427000 audit: BPF prog-id=275 op=UNLOAD Jan 14 23:52:17.427000 audit[6063]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.427000 audit: BPF prog-id=277 op=LOAD Jan 14 23:52:17.427000 audit[6063]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3010 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:17.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432333039613364643538313035356635626235653130666538326264 Jan 14 23:52:17.494233 containerd[1996]: time="2026-01-14T23:52:17.494156252Z" level=info msg="StartContainer for \"42309a3dd581055f5bb5e10fe82bd3bb73bc974f02248e97246918116960778a\" returns successfully" Jan 14 23:52:18.462233 kubelet[3513]: E0114 23:52:18.462163 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fd2bc" podUID="2054a7f1-33a6-4a0b-8079-7e8881899bb3" Jan 14 23:52:18.774094 kubelet[3513]: E0114 23:52:18.773919 3513 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-197?timeout=10s\": context deadline exceeded" Jan 14 23:52:20.462879 kubelet[3513]: E0114 23:52:20.462805 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-789ccfdc9b-bggd8" podUID="54b5943d-5205-46b5-af4e-d4680f06e390" Jan 14 23:52:22.240429 systemd[1]: cri-containerd-c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8.scope: Deactivated successfully. 
Jan 14 23:52:22.241131 systemd[1]: cri-containerd-c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8.scope: Consumed 6.013s CPU time, 22.7M memory peak. Jan 14 23:52:22.244000 audit: BPF prog-id=278 op=LOAD Jan 14 23:52:22.246707 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 14 23:52:22.246939 kernel: audit: type=1334 audit(1768434742.244:957): prog-id=278 op=LOAD Jan 14 23:52:22.244000 audit: BPF prog-id=99 op=UNLOAD Jan 14 23:52:22.251109 kernel: audit: type=1334 audit(1768434742.244:958): prog-id=99 op=UNLOAD Jan 14 23:52:22.251518 containerd[1996]: time="2026-01-14T23:52:22.251440667Z" level=info msg="received container exit event container_id:\"c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8\" id:\"c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8\" pid:3169 exit_status:1 exited_at:{seconds:1768434742 nanos:250360739}" Jan 14 23:52:22.248000 audit: BPF prog-id=114 op=UNLOAD Jan 14 23:52:22.254302 kernel: audit: type=1334 audit(1768434742.248:959): prog-id=114 op=UNLOAD Jan 14 23:52:22.254395 kernel: audit: type=1334 audit(1768434742.248:960): prog-id=118 op=UNLOAD Jan 14 23:52:22.248000 audit: BPF prog-id=118 op=UNLOAD Jan 14 23:52:22.304466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8-rootfs.mount: Deactivated successfully. Jan 14 23:52:23.334076 kubelet[3513]: I0114 23:52:23.334030 3513 scope.go:117] "RemoveContainer" containerID="c6b03fab291f5dd307fb0058be7e16b80d2249035f35cf493b17ebff7cf8abf8" Jan 14 23:52:23.337780 containerd[1996]: time="2026-01-14T23:52:23.337704301Z" level=info msg="CreateContainer within sandbox \"45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 23:52:23.365627 containerd[1996]: time="2026-01-14T23:52:23.363930517Z" level=info msg="Container 1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:23.377520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016044431.mount: Deactivated successfully. Jan 14 23:52:23.386131 containerd[1996]: time="2026-01-14T23:52:23.386047093Z" level=info msg="CreateContainer within sandbox \"45b3ab18905646a82e4d18f4477a3fc4c7b98ddbdac14bb0157106e58da95c72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329\"" Jan 14 23:52:23.387193 containerd[1996]: time="2026-01-14T23:52:23.387156685Z" level=info msg="StartContainer for \"1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329\"" Jan 14 23:52:23.389856 containerd[1996]: time="2026-01-14T23:52:23.389744161Z" level=info msg="connecting to shim 1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329" address="unix:///run/containerd/s/09de55abdc151c170cb6ec4a7c570c28e4b13c12fc6e2cb0fbb99106eed57792" protocol=ttrpc version=3 Jan 14 23:52:23.429975 systemd[1]: Started cri-containerd-1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329.scope - libcontainer container 1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329. 
Jan 14 23:52:23.455000 audit: BPF prog-id=279 op=LOAD Jan 14 23:52:23.457000 audit: BPF prog-id=280 op=LOAD Jan 14 23:52:23.460106 kernel: audit: type=1334 audit(1768434743.455:961): prog-id=279 op=LOAD Jan 14 23:52:23.460205 kernel: audit: type=1334 audit(1768434743.457:962): prog-id=280 op=LOAD Jan 14 23:52:23.457000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.466506 kernel: audit: type=1300 audit(1768434743.457:962): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.474096 kernel: audit: type=1327 audit(1768434743.457:962): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.476286 kubelet[3513]: E0114 23:52:23.476116 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-8jvgv" podUID="296d9b29-20aa-492b-aa70-e26652feb8da" Jan 14 23:52:23.458000 audit: BPF prog-id=280 op=UNLOAD Jan 14 23:52:23.480333 kernel: audit: type=1334 audit(1768434743.458:963): prog-id=280 op=UNLOAD Jan 14 23:52:23.458000 audit[6109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.487600 kernel: audit: type=1300 audit(1768434743.458:963): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.458000 audit: BPF prog-id=281 op=LOAD Jan 14 23:52:23.458000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.459000 audit: BPF prog-id=282 op=LOAD Jan 14 23:52:23.459000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.466000 audit: BPF prog-id=282 op=UNLOAD Jan 14 23:52:23.466000 audit[6109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.466000 audit: BPF prog-id=281 op=UNLOAD Jan 14 23:52:23.466000 audit[6109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.466000 audit: BPF prog-id=283 op=LOAD Jan 14 23:52:23.466000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3033 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:23.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164626639373438363134363230643634356161623466663263373765 Jan 14 23:52:23.549493 containerd[1996]: time="2026-01-14T23:52:23.549414002Z" level=info msg="StartContainer for \"1dbf9748614620d645aab4ff2c77ee5c1d9f076f646ca71981579bd13e727329\" returns successfully" Jan 14 23:52:24.462870 kubelet[3513]: E0114 23:52:24.462793 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d86655bcb-l8j4w" podUID="255a4468-7378-413d-92bd-8056478658d3" Jan 14 23:52:25.464118 kubelet[3513]: E0114 23:52:25.463978 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fdc6fb9d4-p5lnk" podUID="21b369b7-d986-41d1-8a2e-a01d832685f7" Jan 14 23:52:26.462353 kubelet[3513]: E0114 23:52:26.462241 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc6d978d-lsv59" podUID="49be0d3d-84d7-4994-b2e7-37cea1fa9624" Jan 14 23:52:27.464015 kubelet[3513]: E0114 23:52:27.463815 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dsvj" podUID="23b28fd4-dc96-481b-a69a-1d96358778f5"