Jan 20 00:40:09.992627 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 20 00:40:09.992644 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Mon Jan 19 22:52:15 -00 2026
Jan 20 00:40:09.992651 kernel: KASLR enabled
Jan 20 00:40:09.992655 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 20 00:40:09.992660 kernel: printk: legacy bootconsole [pl11] enabled
Jan 20 00:40:09.992664 kernel: efi: EFI v2.7 by EDK II
Jan 20 00:40:09.992670 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89c018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 20 00:40:09.992674 kernel: random: crng init done
Jan 20 00:40:09.992678 kernel: secureboot: Secure boot disabled
Jan 20 00:40:09.992682 kernel: ACPI: Early table checksum verification disabled
Jan 20 00:40:09.992686 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 20 00:40:09.992690 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992695 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992700 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 20 00:40:09.992705 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992710 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992714 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992719 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992724 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992728 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992733 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 20 00:40:09.992737 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 00:40:09.992742 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 20 00:40:09.992746 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 20 00:40:09.992751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 20 00:40:09.992755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 20 00:40:09.992759 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 20 00:40:09.992764 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 20 00:40:09.992769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 20 00:40:09.992773 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 20 00:40:09.992778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 20 00:40:09.992782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 20 00:40:09.992787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 20 00:40:09.992791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 20 00:40:09.992795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 20 00:40:09.992800 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 20 00:40:09.992804 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 20 00:40:09.992809 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 20 00:40:09.992814 kernel: Zone ranges:
Jan 20 00:40:09.992819 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 20 00:40:09.992825 kernel: DMA32 empty
Jan 20 00:40:09.992830 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 00:40:09.992834 kernel: Device empty
Jan 20 00:40:09.992840 kernel: Movable zone start for each node
Jan 20 00:40:09.992845 kernel: Early memory node ranges
Jan 20 00:40:09.992849 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 20 00:40:09.992854 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 20 00:40:09.992859 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 20 00:40:09.992863 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 20 00:40:09.992868 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 20 00:40:09.992872 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 20 00:40:09.992877 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 00:40:09.992882 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 20 00:40:09.992887 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 20 00:40:09.992892 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 20 00:40:09.992896 kernel: psci: probing for conduit method from ACPI.
Jan 20 00:40:09.992901 kernel: psci: PSCIv1.3 detected in firmware.
Jan 20 00:40:09.992906 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 20 00:40:09.992910 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 20 00:40:09.992918 kernel: psci: SMC Calling Convention v1.4 Jan 20 00:40:09.992923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 20 00:40:09.992928 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 20 00:40:09.992932 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 20 00:40:09.992937 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 20 00:40:09.992943 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 20 00:40:09.992947 kernel: Detected PIPT I-cache on CPU0 Jan 20 00:40:09.992952 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 20 00:40:09.992957 kernel: CPU features: detected: GIC system register CPU interface Jan 20 00:40:09.992962 kernel: CPU features: detected: Spectre-v4 Jan 20 00:40:09.992966 kernel: CPU features: detected: Spectre-BHB Jan 20 00:40:09.992971 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 20 00:40:09.992976 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 20 00:40:09.992980 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 20 00:40:09.992985 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 20 00:40:09.992990 kernel: alternatives: applying boot alternatives Jan 20 00:40:09.992996 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=206b712a0c1875fb8c7063accb0d68a6eba9a03f8b0d8d52f8e385d18728fb74 Jan 20 00:40:09.993001 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:40:09.993006 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:40:09.993010 kernel: Fallback order for Node 0: 0 Jan 20 00:40:09.993015 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 20 00:40:09.993020 kernel: Policy zone: Normal Jan 20 00:40:09.993025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:40:09.993029 kernel: software IO TLB: area num 2. Jan 20 00:40:09.993034 kernel: software IO TLB: mapped [mem 0x0000000037380000-0x000000003b380000] (64MB) Jan 20 00:40:09.993039 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 00:40:09.993044 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:40:09.993049 kernel: rcu: RCU event tracing is enabled. Jan 20 00:40:09.993054 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 00:40:09.993059 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:40:09.993064 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:40:09.993068 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:40:09.993073 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 00:40:09.993078 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 00:40:09.993083 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 20 00:40:09.993087 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 00:40:09.993092 kernel: GICv3: 960 SPIs implemented Jan 20 00:40:09.993098 kernel: GICv3: 0 Extended SPIs implemented Jan 20 00:40:09.993102 kernel: Root IRQ handler: gic_handle_irq Jan 20 00:40:09.993107 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 20 00:40:09.993111 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 20 00:40:09.993116 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 20 00:40:09.993121 kernel: ITS: No ITS available, not enabling LPIs Jan 20 00:40:09.993126 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:40:09.993130 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 20 00:40:09.993135 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:40:09.993140 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 20 00:40:09.993145 kernel: Console: colour dummy device 80x25 Jan 20 00:40:09.993151 kernel: printk: legacy console [tty1] enabled Jan 20 00:40:09.993156 kernel: ACPI: Core revision 20240827 Jan 20 00:40:09.993161 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 20 00:40:09.993166 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:40:09.993171 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 00:40:09.993176 kernel: landlock: Up and running. Jan 20 00:40:09.993180 kernel: SELinux: Initializing. Jan 20 00:40:09.993186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:40:09.993191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:40:09.993196 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 20 00:40:09.993201 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 20 00:40:09.993209 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 20 00:40:09.993215 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:40:09.993220 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:40:09.993225 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 00:40:09.993230 kernel: Remapping and enabling EFI services. Jan 20 00:40:09.993236 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:40:09.993241 kernel: Detected PIPT I-cache on CPU1 Jan 20 00:40:09.993246 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 20 00:40:09.993252 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 20 00:40:09.993258 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 00:40:09.993263 kernel: SMP: Total of 2 processors activated. 
Jan 20 00:40:09.993268 kernel: CPU: All CPU(s) started at EL1 Jan 20 00:40:09.993273 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 00:40:09.993278 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 20 00:40:09.993283 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 00:40:09.993289 kernel: CPU features: detected: Common not Private translations Jan 20 00:40:09.993295 kernel: CPU features: detected: CRC32 instructions Jan 20 00:40:09.995323 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 20 00:40:09.995337 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 00:40:09.995343 kernel: CPU features: detected: LSE atomic instructions Jan 20 00:40:09.995349 kernel: CPU features: detected: Privileged Access Never Jan 20 00:40:09.995354 kernel: CPU features: detected: Speculation barrier (SB) Jan 20 00:40:09.995360 kernel: CPU features: detected: TLB range maintenance instructions Jan 20 00:40:09.995368 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 20 00:40:09.995374 kernel: CPU features: detected: Scalable Vector Extension Jan 20 00:40:09.995379 kernel: alternatives: applying system-wide alternatives Jan 20 00:40:09.995384 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 20 00:40:09.995390 kernel: SVE: maximum available vector length 16 bytes per vector Jan 20 00:40:09.995395 kernel: SVE: default vector length 16 bytes per vector Jan 20 00:40:09.995401 kernel: Memory: 3979964K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 193008K reserved, 16384K cma-reserved) Jan 20 00:40:09.995407 kernel: devtmpfs: initialized Jan 20 00:40:09.995412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:40:09.995418 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 00:40:09.995423 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 20 00:40:09.995428 kernel: 0 pages in range for non-PLT usage Jan 20 00:40:09.995433 kernel: 515184 pages in range for PLT usage Jan 20 00:40:09.995439 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:40:09.995445 kernel: SMBIOS 3.1.0 present. Jan 20 00:40:09.995450 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 20 00:40:09.995455 kernel: DMI: Memory slots populated: 2/2 Jan 20 00:40:09.995461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:40:09.995466 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 00:40:09.995471 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 00:40:09.995477 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 00:40:09.995482 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:40:09.995488 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 20 00:40:09.995493 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:40:09.995499 kernel: cpuidle: using governor menu Jan 20 00:40:09.995504 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 00:40:09.995509 kernel: ASID allocator initialised with 32768 entries
Jan 20 00:40:09.995514 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 00:40:09.995519 kernel: Serial: AMBA PL011 UART driver
Jan 20 00:40:09.995526 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 00:40:09.995531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 00:40:09.995536 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 20 00:40:09.995541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 20 00:40:09.995546 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 00:40:09.995552 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 00:40:09.995557 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 20 00:40:09.995563 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 20 00:40:09.995568 kernel: ACPI: Added _OSI(Module Device)
Jan 20 00:40:09.995573 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 00:40:09.995578 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 00:40:09.995583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 00:40:09.995589 kernel: ACPI: Interpreter enabled
Jan 20 00:40:09.995594 kernel: ACPI: Using GIC for interrupt routing
Jan 20 00:40:09.995600 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 20 00:40:09.995605 kernel: printk: legacy console [ttyAMA0] enabled
Jan 20 00:40:09.995610 kernel: printk: legacy bootconsole [pl11] disabled
Jan 20 00:40:09.995615 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 20 00:40:09.995621 kernel: ACPI: CPU0 has been hot-added
Jan 20 00:40:09.995626 kernel: ACPI: CPU1 has been hot-added
Jan 20 00:40:09.995631 kernel: iommu: Default domain type: Translated
Jan 20 00:40:09.995637 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 20 00:40:09.995642 kernel: efivars: Registered efivars operations
Jan 20 00:40:09.995647 kernel: vgaarb: loaded
Jan 20 00:40:09.995653 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 20 00:40:09.995658 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 00:40:09.995663 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 00:40:09.995668 kernel: pnp: PnP ACPI init
Jan 20 00:40:09.995674 kernel: pnp: PnP ACPI: found 0 devices
Jan 20 00:40:09.995679 kernel: NET: Registered PF_INET protocol family
Jan 20 00:40:09.995685 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 00:40:09.995690 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 00:40:09.995695 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 00:40:09.995701 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 00:40:09.995706 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 00:40:09.995712 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 00:40:09.995717 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:40:09.995722 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:40:09.995728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 00:40:09.995733 kernel: PCI: CLS 0 bytes, default 64
Jan 20 00:40:09.995738 kernel: kvm [1]: HYP mode not available
Jan 20 00:40:09.995743 kernel: Initialise system trusted keyrings
Jan 20 00:40:09.995748 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 00:40:09.995754 kernel: Key type asymmetric registered
Jan 20 00:40:09.995759 kernel: Asymmetric key parser 'x509' registered
Jan 20 00:40:09.995765 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 20 00:40:09.995770 kernel: io scheduler mq-deadline registered
Jan 20 00:40:09.995775 kernel: io scheduler kyber registered
Jan 20 00:40:09.995780 kernel: io scheduler bfq registered
Jan 20 00:40:09.995785 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 00:40:09.995791 kernel: thunder_xcv, ver 1.0
Jan 20 00:40:09.995797 kernel: thunder_bgx, ver 1.0
Jan 20 00:40:09.995802 kernel: nicpf, ver 1.0
Jan 20 00:40:09.995807 kernel: nicvf, ver 1.0
Jan 20 00:40:09.995938 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 20 00:40:09.996008 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T00:40:04 UTC (1768869604)
Jan 20 00:40:09.996017 kernel: efifb: probing for efifb
Jan 20 00:40:09.996022 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 20 00:40:09.996027 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 20 00:40:09.996033 kernel: efifb: scrolling: redraw
Jan 20 00:40:09.996038 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 00:40:09.996043 kernel: Console: switching to colour frame buffer device 128x48
Jan 20 00:40:09.996048 kernel: fb0: EFI VGA frame buffer device
Jan 20 00:40:09.996054 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 20 00:40:09.996060 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 00:40:09.996065 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 20 00:40:09.996070 kernel: watchdog: NMI not fully supported
Jan 20 00:40:09.996076 kernel: NET: Registered PF_INET6 protocol family
Jan 20 00:40:09.996081 kernel: watchdog: Hard watchdog permanently disabled
Jan 20 00:40:09.996086 kernel: Segment Routing with IPv6
Jan 20 00:40:09.996092 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 00:40:09.996097 kernel: NET: Registered PF_PACKET protocol family
Jan 20 00:40:09.996102 kernel: Key type dns_resolver registered
Jan 20 00:40:09.996107 kernel: registered taskstats version 1
Jan 20 00:40:09.996113 kernel: Loading compiled-in X.509 certificates
Jan 20 00:40:09.996118 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: b9ade5620e57e6dd3f7c5c42f6ef830eecb768a2'
Jan 20 00:40:09.996123 kernel: Demotion targets for Node 0: null
Jan 20 00:40:09.996129 kernel: Key type .fscrypt registered
Jan 20 00:40:09.996134 kernel: Key type fscrypt-provisioning registered
Jan 20 00:40:09.996139 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 00:40:09.996144 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:40:09.996150 kernel: ima: No architecture policies found Jan 20 00:40:09.996155 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 00:40:09.996160 kernel: clk: Disabling unused clocks Jan 20 00:40:09.996165 kernel: PM: genpd: Disabling unused power domains Jan 20 00:40:09.996171 kernel: Freeing unused kernel memory: 12416K Jan 20 00:40:09.996176 kernel: Run /init as init process Jan 20 00:40:09.996181 kernel: with arguments: Jan 20 00:40:09.996186 kernel: /init Jan 20 00:40:09.996191 kernel: with environment: Jan 20 00:40:09.996196 kernel: HOME=/ Jan 20 00:40:09.996202 kernel: TERM=linux Jan 20 00:40:09.996208 kernel: hv_vmbus: Vmbus version:5.3 Jan 20 00:40:09.996213 kernel: hv_vmbus: registering driver hid_hyperv Jan 20 00:40:09.996218 kernel: SCSI subsystem initialized Jan 20 00:40:09.996223 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 20 00:40:09.996335 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 20 00:40:09.996344 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 20 00:40:09.996351 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 20 00:40:09.996357 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 00:40:09.996362 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 00:40:09.996367 kernel: PTP clock support registered Jan 20 00:40:09.996373 kernel: hv_utils: Registering HyperV Utility Driver Jan 20 00:40:09.996378 kernel: hv_vmbus: registering driver hv_utils Jan 20 00:40:09.996383 kernel: hv_utils: Heartbeat IC version 3.0 Jan 20 00:40:09.996389 kernel: hv_utils: Shutdown IC version 3.2 Jan 20 00:40:09.996395 kernel: hv_utils: TimeSync IC version 4.0 Jan 20 00:40:09.996400 kernel: hv_vmbus: registering driver hv_storvsc Jan 20 00:40:09.996501 kernel: scsi host1: storvsc_host_t Jan 20 00:40:09.996586 kernel: scsi host0: storvsc_host_t Jan 20 00:40:09.996673 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 20 00:40:09.996756 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 20 00:40:09.996840 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 20 00:40:09.996915 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 20 00:40:09.996989 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 20 00:40:09.997063 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 20 00:40:09.997136 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 20 00:40:09.997218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 00:40:09.997288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 00:40:09.997294 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 00:40:09.997383 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 20 00:40:09.997459 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 20 00:40:09.997468 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:40:09.997541 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 20 00:40:09.997548 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 00:40:09.997553 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:40:09.997558 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 00:40:09.997564 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 00:40:09.997569 kernel: raid6: neonx8 gen() 18550 MB/s Jan 20 00:40:09.997575 kernel: raid6: neonx4 gen() 18572 MB/s Jan 20 00:40:09.997580 kernel: raid6: neonx2 gen() 17085 MB/s Jan 20 00:40:09.997586 kernel: raid6: neonx1 gen() 15016 MB/s Jan 20 00:40:09.997591 kernel: raid6: int64x8 gen() 10546 MB/s Jan 20 00:40:09.997596 kernel: raid6: int64x4 gen() 10615 MB/s Jan 20 00:40:09.997601 kernel: raid6: int64x2 gen() 8994 MB/s Jan 20 00:40:09.997606 kernel: raid6: int64x1 gen() 6999 MB/s Jan 20 00:40:09.997612 kernel: raid6: using algorithm neonx4 gen() 18572 MB/s Jan 20 00:40:09.997618 kernel: raid6: .... xor() 15135 MB/s, rmw enabled Jan 20 00:40:09.997623 kernel: raid6: using neon recovery algorithm Jan 20 00:40:09.997628 kernel: xor: measuring software checksum speed Jan 20 00:40:09.997633 kernel: 8regs : 28440 MB/sec Jan 20 00:40:09.997638 kernel: 32regs : 28804 MB/sec Jan 20 00:40:09.997644 kernel: arm64_neon : 37473 MB/sec Jan 20 00:40:09.997649 kernel: xor: using function: arm64_neon (37473 MB/sec) Jan 20 00:40:09.997655 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:40:09.997660 kernel: BTRFS: device fsid bbab59fc-6596-4baf-b7a9-8825520ceeab devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (429) Jan 20 00:40:09.997666 kernel: BTRFS info (device dm-0): first mount of filesystem bbab59fc-6596-4baf-b7a9-8825520ceeab Jan 20 00:40:09.997671 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 00:40:09.997676 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:40:09.997681 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 00:40:09.997687 kernel: loop: module loaded Jan 20 00:40:09.997693 kernel: loop0: detected capacity change from 0 to 91488 Jan 20 00:40:09.997698 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:40:09.997704 systemd[1]: Successfully made /usr/ read-only. Jan 20 00:40:09.997711 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 00:40:09.997718 systemd[1]: Detected virtualization microsoft. Jan 20 00:40:09.997724 systemd[1]: Detected architecture arm64. Jan 20 00:40:09.997729 systemd[1]: Running in initrd. Jan 20 00:40:09.997735 systemd[1]: No hostname configured, using default hostname. Jan 20 00:40:09.997741 systemd[1]: Hostname set to . Jan 20 00:40:09.997747 systemd[1]: Initializing machine ID from random generator. Jan 20 00:40:09.997752 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:40:09.997758 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:40:09.997764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:40:09.997770 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 20 00:40:09.997776 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:40:09.997782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:40:09.997788 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:40:09.997794 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:40:09.997801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:40:09.997807 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:40:09.997812 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 00:40:09.997818 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:40:09.997824 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:40:09.997829 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:40:09.997836 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:40:09.997841 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:40:09.997847 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:40:09.997852 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 00:40:09.997858 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:40:09.997864 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 00:40:09.997870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:40:09.997880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:40:09.997887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:40:09.997892 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:40:09.997898 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:40:09.997904 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 00:40:09.997911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:40:09.997916 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 00:40:09.997923 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 00:40:09.997929 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 00:40:09.997934 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:40:09.997940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:40:09.997960 systemd-journald[568]: Collecting audit messages is enabled. Jan 20 00:40:09.997974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:09.997981 systemd-journald[568]: Journal started Jan 20 00:40:09.997994 systemd-journald[568]: Runtime Journal (/run/log/journal/b6694d7c59944efebd733823f5629e3c) is 8M, max 78.3M, 70.3M free. Jan 20 00:40:10.012600 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 20 00:40:10.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.013083 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 00:40:10.028349 kernel: audit: type=1130 audit(1768869610.011:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.032615 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:40:10.053436 kernel: audit: type=1130 audit(1768869610.031:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.055324 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 00:40:10.072056 kernel: audit: type=1130 audit(1768869610.053:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.073761 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:40:10.089968 kernel: audit: type=1130 audit(1768869610.071:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.103988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:40:10.126329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 00:40:10.199464 systemd-modules-load[571]: Inserted module 'br_netfilter' Jan 20 00:40:10.208327 kernel: Bridge firewalling registered Jan 20 00:40:10.203576 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:40:10.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.203886 systemd-tmpfiles[581]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 00:40:10.230469 kernel: audit: type=1130 audit(1768869610.212:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.224145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 20 00:40:10.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.254483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:40:10.276197 kernel: audit: type=1130 audit(1768869610.235:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.276215 kernel: audit: type=1130 audit(1768869610.259:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.277485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:40:10.281971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:40:10.302726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:10.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.314037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:40:10.348431 kernel: audit: type=1130 audit(1768869610.308:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.348454 kernel: audit: type=1130 audit(1768869610.331:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.348776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:40:10.368231 kernel: audit: type=1130 audit(1768869610.352:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.368942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:40:10.377000 audit: BPF prog-id=6 op=LOAD Jan 20 00:40:10.379517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:40:10.435575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 00:40:10.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.561288 systemd-resolved[597]: Positive Trust Anchors: Jan 20 00:40:10.561318 systemd-resolved[597]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:40:10.561320 systemd-resolved[597]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 00:40:10.561339 systemd-resolved[597]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:40:10.564395 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 00:40:10.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:10.577929 systemd-resolved[597]: Defaulting to hostname 'linux'. Jan 20 00:40:10.578613 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:40:10.613004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:40:10.636841 dracut-cmdline[609]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=206b712a0c1875fb8c7063accb0d68a6eba9a03f8b0d8d52f8e385d18728fb74 Jan 20 00:40:10.805324 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:40:10.880328 kernel: iscsi: registered transport (tcp) Jan 20 00:40:10.925056 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:40:10.925078 kernel: QLogic iSCSI HBA Driver Jan 20 00:40:10.999478 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 00:40:11.024412 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:40:11.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.030032 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:40:11.076909 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:40:11.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.086346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 20 00:40:11.090851 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:40:11.132365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:40:11.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.140000 audit: BPF prog-id=7 op=LOAD Jan 20 00:40:11.140000 audit: BPF prog-id=8 op=LOAD Jan 20 00:40:11.142313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:40:11.298444 systemd-udevd[848]: Using default interface naming scheme 'v257'. Jan 20 00:40:11.303753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:40:11.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.315962 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:40:11.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.322506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:40:11.345000 audit: BPF prog-id=9 op=LOAD Jan 20 00:40:11.347416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:40:11.362054 dracut-pre-trigger[950]: rd.md=0: removing MD RAID activation Jan 20 00:40:11.388843 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:40:11.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.397585 systemd-networkd[951]: lo: Link UP Jan 20 00:40:11.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.397588 systemd-networkd[951]: lo: Gained carrier Jan 20 00:40:11.398197 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:40:11.403184 systemd[1]: Reached target network.target - Network. Jan 20 00:40:11.409786 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:40:11.458963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:40:11.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.466338 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:40:11.567333 kernel: hv_vmbus: registering driver hv_netvsc Jan 20 00:40:11.591886 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:40:11.591985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 00:40:11.613357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#84 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 00:40:11.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.608880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:11.618910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:11.640612 kernel: hv_netvsc 7ced8d8a-f9e3-7ced-8d8a-f9e37ced8d8a eth0: VF slot 1 added Jan 20 00:40:11.641364 systemd-networkd[951]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 00:40:11.641369 systemd-networkd[951]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:40:11.650879 systemd-networkd[951]: eth0: Link UP Jan 20 00:40:11.650964 systemd-networkd[951]: eth0: Gained carrier Jan 20 00:40:11.650973 systemd-networkd[951]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 00:40:11.674449 systemd-networkd[951]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 00:40:11.694431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:11.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:11.723321 kernel: hv_vmbus: registering driver hv_pci Jan 20 00:40:11.729470 kernel: hv_pci ae77923e-8848-4ab7-b4b5-5fe3e7b585f3: PCI VMBus probing: Using version 0x10004 Jan 20 00:40:11.729615 kernel: hv_pci ae77923e-8848-4ab7-b4b5-5fe3e7b585f3: PCI host bridge to bus 8848:00 Jan 20 00:40:11.738012 kernel: pci_bus 8848:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 20 00:40:11.742518 kernel: pci_bus 8848:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 00:40:11.748420 kernel: pci 8848:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 20 00:40:11.753390 kernel: pci 8848:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 00:40:11.758319 kernel: pci 8848:00:02.0: enabling Extended Tags Jan 20 00:40:11.772375 kernel: pci 8848:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8848:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 20 00:40:11.781809 kernel: pci_bus 8848:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 00:40:11.781954 kernel: pci 8848:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 20 00:40:12.047439 kernel: mlx5_core 8848:00:02.0: enabling device (0000 -> 0002) Jan 20 00:40:12.055585 kernel: mlx5_core 8848:00:02.0: PTM is not supported by PCIe Jan 20 00:40:12.055751 kernel: mlx5_core 8848:00:02.0: firmware version: 16.30.5026 Jan 20 00:40:12.224812 kernel: hv_netvsc 7ced8d8a-f9e3-7ced-8d8a-f9e37ced8d8a eth0: VF registering: eth1 Jan 20 00:40:12.228314 kernel: mlx5_core 8848:00:02.0 eth1: joined to eth0 Jan 20 00:40:12.235389 kernel: mlx5_core 8848:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 20 00:40:12.249187 systemd-networkd[951]: eth1: Interface name change detected, renamed to enP34888s1. 
Jan 20 00:40:12.254370 kernel: mlx5_core 8848:00:02.0 enP34888s1: renamed from eth1 Jan 20 00:40:12.355972 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 20 00:40:12.366954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:40:12.382320 kernel: mlx5_core 8848:00:02.0 enP34888s1: Link up Jan 20 00:40:12.414764 systemd-networkd[951]: enP34888s1: Link UP Jan 20 00:40:12.418299 kernel: hv_netvsc 7ced8d8a-f9e3-7ced-8d8a-f9e37ced8d8a eth0: Data path switched to VF: enP34888s1 Jan 20 00:40:12.532243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 20 00:40:12.544006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 00:40:12.637663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 20 00:40:12.657218 systemd-networkd[951]: enP34888s1: Gained carrier Jan 20 00:40:12.772926 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:40:12.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:12.779489 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:40:12.784982 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:40:12.800370 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:40:12.810558 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:40:12.841681 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:40:12.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:13.192558 systemd-networkd[951]: eth0: Gained IPv6LL Jan 20 00:40:13.790712 disk-uuid[1070]: Warning: The kernel is still using the old partition table. Jan 20 00:40:13.790712 disk-uuid[1070]: The new table will be used at the next reboot or after you Jan 20 00:40:13.790712 disk-uuid[1070]: run partprobe(8) or kpartx(8) Jan 20 00:40:13.790712 disk-uuid[1070]: The operation has completed successfully. Jan 20 00:40:13.807397 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:40:13.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:13.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:13.807489 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 00:40:13.813284 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 00:40:13.879347 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1222) Jan 20 00:40:13.889151 kernel: BTRFS info (device sda6): first mount of filesystem 9521f811-4827-4eb4-a543-3d54044f87c2 Jan 20 00:40:13.889180 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 00:40:13.931130 kernel: BTRFS info (device sda6): turning on async discard Jan 20 00:40:13.931145 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 00:40:13.939757 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 00:40:13.952694 kernel: BTRFS info (device sda6): last unmount of filesystem 9521f811-4827-4eb4-a543-3d54044f87c2 Jan 20 00:40:13.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:13.945079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 00:40:15.812297 ignition[1241]: Ignition 2.22.0 Jan 20 00:40:15.813715 ignition[1241]: Stage: fetch-offline Jan 20 00:40:15.827337 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 20 00:40:15.827356 kernel: audit: type=1130 audit(1768869615.821:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:15.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:15.816051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:40:15.813842 ignition[1241]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:15.829142 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 20 00:40:15.813858 ignition[1241]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:15.813933 ignition[1241]: parsed url from cmdline: "" Jan 20 00:40:15.813936 ignition[1241]: no config URL provided Jan 20 00:40:15.813939 ignition[1241]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:40:15.813946 ignition[1241]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:40:15.813950 ignition[1241]: failed to fetch config: resource requires networking Jan 20 00:40:15.814076 ignition[1241]: Ignition finished successfully Jan 20 00:40:15.888441 ignition[1249]: Ignition 2.22.0 Jan 20 00:40:15.888454 ignition[1249]: Stage: fetch Jan 20 00:40:15.888617 ignition[1249]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:15.888627 ignition[1249]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:15.888713 ignition[1249]: parsed url from cmdline: "" Jan 20 00:40:15.888715 ignition[1249]: no config URL provided Jan 20 00:40:15.888719 ignition[1249]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:40:15.888725 ignition[1249]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:40:15.888738 ignition[1249]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 00:40:15.954644 ignition[1249]: GET result: OK Jan 20 00:40:15.957004 ignition[1249]: config has been read from IMDS userdata Jan 20 00:40:15.957032 ignition[1249]: parsing config with SHA512: 467c3df43c10c4807fc423bfdcbdf1fffa2a4c8506358318b79e6f42738669b408062df349cf9fe4d0a062d60530603aec8a5670cd97524096e9852564ab26a5 Jan 20 00:40:15.962004 unknown[1249]: fetched base config from "system" Jan 20 00:40:15.962024 unknown[1249]: fetched base config from "system" Jan 20 00:40:15.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:15.962265 ignition[1249]: fetch: fetch complete Jan 20 00:40:15.992012 kernel: audit: type=1130 audit(1768869615.970:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:15.962028 unknown[1249]: fetched user config from "azure" Jan 20 00:40:15.962269 ignition[1249]: fetch: fetch passed Jan 20 00:40:15.966547 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 00:40:15.962317 ignition[1249]: Ignition finished successfully Jan 20 00:40:15.972223 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 00:40:16.021396 ignition[1255]: Ignition 2.22.0 Jan 20 00:40:16.021408 ignition[1255]: Stage: kargs Jan 20 00:40:16.021578 ignition[1255]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:16.021586 ignition[1255]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:16.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:16.027425 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 00:40:16.022198 ignition[1255]: kargs: kargs passed Jan 20 00:40:16.061649 kernel: audit: type=1130 audit(1768869616.034:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:16.051567 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 00:40:16.022235 ignition[1255]: Ignition finished successfully Jan 20 00:40:16.081475 ignition[1261]: Ignition 2.22.0 Jan 20 00:40:16.081483 ignition[1261]: Stage: disks Jan 20 00:40:16.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:16.083559 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 00:40:16.081637 ignition[1261]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:16.119592 kernel: audit: type=1130 audit(1768869616.088:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:16.089282 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 00:40:16.081644 ignition[1261]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:16.107232 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:40:16.082133 ignition[1261]: disks: disks passed Jan 20 00:40:16.116191 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:40:16.082167 ignition[1261]: Ignition finished successfully Jan 20 00:40:16.123851 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:40:16.132373 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:40:16.146442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 00:40:16.333957 systemd-fsck[1269]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 20 00:40:16.341782 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 00:40:16.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:16.364534 kernel: audit: type=1130 audit(1768869616.346:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:16.364734 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 00:40:16.815323 kernel: EXT4-fs (sda9): mounted filesystem 86245b97-e7c3-45ed-ab5a-2f99a550dd39 r/w with ordered data mode. Quota mode: none. Jan 20 00:40:16.816177 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 00:40:16.820002 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 00:40:16.883892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:40:16.904803 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 00:40:16.912592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 00:40:16.923081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
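The fetch stage above pulls the Ignition config from the Azure instance metadata service at the userData URL shown in the log. A minimal sketch of an equivalent request, assuming the standard IMDS behaviour (the Metadata: true request header and base64-encoded user data), neither of which is shown in the log itself:

    import base64
    import urllib.request

    # URL copied from the ignition fetch log entry above.
    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    # Assumption: IMDS requires the "Metadata: true" header and returns user data
    # base64-encoded; Ignition then parses the decoded bytes as its JSON config.
    req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        userdata = base64.b64decode(resp.read())

    print(userdata.decode(errors="replace"), end="")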
Jan 20 00:40:16.952493 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1283) Jan 20 00:40:16.952511 kernel: BTRFS info (device sda6): first mount of filesystem 9521f811-4827-4eb4-a543-3d54044f87c2 Jan 20 00:40:16.952519 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 00:40:16.923112 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:40:16.963911 kernel: BTRFS info (device sda6): turning on async discard Jan 20 00:40:16.934756 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 00:40:16.976271 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 00:40:16.969228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 00:40:16.977681 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 00:40:18.068908 coreos-metadata[1285]: Jan 20 00:40:18.068 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 00:40:18.075200 coreos-metadata[1285]: Jan 20 00:40:18.074 INFO Fetch successful Jan 20 00:40:18.075200 coreos-metadata[1285]: Jan 20 00:40:18.074 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 00:40:18.087481 coreos-metadata[1285]: Jan 20 00:40:18.087 INFO Fetch successful Jan 20 00:40:18.116559 coreos-metadata[1285]: Jan 20 00:40:18.115 INFO wrote hostname ci-4515.1.0-n-fc9e3ff023 to /sysroot/etc/hostname Jan 20 00:40:18.124042 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 00:40:18.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:18.146329 kernel: audit: type=1130 audit(1768869618.128:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:18.441659 initrd-setup-root[1313]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 00:40:18.533155 initrd-setup-root[1320]: cut: /sysroot/etc/group: No such file or directory Jan 20 00:40:18.571703 initrd-setup-root[1327]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 00:40:18.576481 initrd-setup-root[1334]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 00:40:20.036865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 00:40:20.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:20.052400 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 00:40:20.065685 kernel: audit: type=1130 audit(1768869620.044:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:20.067882 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 00:40:20.106184 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 00:40:20.118643 kernel: BTRFS info (device sda6): last unmount of filesystem 9521f811-4827-4eb4-a543-3d54044f87c2 Jan 20 00:40:20.126382 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
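The flatcar-metadata-hostname step logged above reads the instance name from the second IMDS URL in the log and writes it to /sysroot/etc/hostname so the real root comes up with the right name. A rough equivalent, again assuming the usual Metadata: true header:

    import urllib.request

    # URL and target path copied from the coreos-metadata log entries above.
    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})  # header is an assumption
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")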
Jan 20 00:40:20.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:20.147368 kernel: audit: type=1130 audit(1768869620.133:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:20.152324 ignition[1405]: INFO : Ignition 2.22.0 Jan 20 00:40:20.152324 ignition[1405]: INFO : Stage: mount Jan 20 00:40:20.152324 ignition[1405]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:20.152324 ignition[1405]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:20.188012 kernel: audit: type=1130 audit(1768869620.163:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:20.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:20.188058 ignition[1405]: INFO : mount: mount passed Jan 20 00:40:20.188058 ignition[1405]: INFO : Ignition finished successfully Jan 20 00:40:20.156728 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 00:40:20.184522 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 00:40:20.199516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:40:20.228643 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1414) Jan 20 00:40:20.239182 kernel: BTRFS info (device sda6): first mount of filesystem 9521f811-4827-4eb4-a543-3d54044f87c2 Jan 20 00:40:20.239215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 00:40:20.248528 kernel: BTRFS info (device sda6): turning on async discard Jan 20 00:40:20.248561 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 00:40:20.249813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 00:40:20.279219 ignition[1432]: INFO : Ignition 2.22.0 Jan 20 00:40:20.279219 ignition[1432]: INFO : Stage: files Jan 20 00:40:20.285160 ignition[1432]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:20.285160 ignition[1432]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:20.285160 ignition[1432]: DEBUG : files: compiled without relabeling support, skipping Jan 20 00:40:20.285160 ignition[1432]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 00:40:20.285160 ignition[1432]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 00:40:20.428668 ignition[1432]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 00:40:20.434240 ignition[1432]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 00:40:20.434240 ignition[1432]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 00:40:20.428964 unknown[1432]: wrote ssh authorized keys file for user: core Jan 20 00:40:20.487176 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 00:40:20.495005 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 20 00:40:20.518388 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 00:40:20.638907 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:40:20.646894 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:40:20.703553 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:40:20.703553 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:40:20.703553 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 00:40:20.703553 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 00:40:20.703553 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 00:40:20.703553 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 20 00:40:21.240435 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 00:40:21.672318 ignition[1432]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 00:40:21.681614 ignition[1432]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 00:40:21.978342 ignition[1432]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:40:22.005125 ignition[1432]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:40:22.005125 ignition[1432]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 00:40:22.005125 ignition[1432]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:40:22.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.041145 ignition[1432]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 00:40:22.041145 ignition[1432]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:40:22.041145 ignition[1432]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:40:22.041145 ignition[1432]: INFO : files: files passed Jan 20 00:40:22.041145 ignition[1432]: INFO : Ignition finished successfully Jan 20 00:40:22.080869 kernel: audit: type=1130 audit(1768869622.024:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.018463 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:40:22.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.025979 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:40:22.116727 kernel: audit: type=1130 audit(1768869622.085:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.116747 kernel: audit: type=1131 audit(1768869622.085:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:22.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.069733 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 00:40:22.077669 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 00:40:22.079850 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:40:22.135518 initrd-setup-root-after-ignition[1463]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:40:22.135518 initrd-setup-root-after-ignition[1463]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:40:22.148488 initrd-setup-root-after-ignition[1467]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:40:22.149127 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:40:22.181638 kernel: audit: type=1130 audit(1768869622.159:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.159986 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:40:22.186764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:40:22.223901 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:40:22.224016 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:40:22.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.249750 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 00:40:22.254102 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:40:22.277436 kernel: audit: type=1130 audit(1768869622.232:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.277454 kernel: audit: type=1131 audit(1768869622.232:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.262306 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:40:22.277410 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:40:22.308343 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
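The files stage logged above wrote an SSH key for the "core" user, fetched the helm tarball, dropped several YAML files and /etc/flatcar/update.conf, linked /etc/extensions/kubernetes.raw to the downloaded sysext image, and enabled prepare-helm.service. A hypothetical sketch of an Ignition v3-style config that would drive operations of that shape; the paths mirror the log, but the key material, file contents, unit body, and exact spec version are placeholders and should be checked against the Ignition spec actually in use:

    import json

    config = {
        "ignition": {"version": "3.3.0"},  # spec version is illustrative only
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf", "mode": 420,
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},  # placeholder contents
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Placeholder unit body\n"
                         "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                         "[Install]\nWantedBy=multi-user.target\n"},
        ]},
    }

    print(json.dumps(config, indent=2))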
Jan 20 00:40:22.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.330885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:40:22.341165 kernel: audit: type=1130 audit(1768869622.312:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.360154 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:40:22.360311 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:40:22.370002 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:40:22.379505 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:40:22.387857 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:40:22.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.387950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:40:22.411604 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:40:22.423981 kernel: audit: type=1131 audit(1768869622.395:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.420152 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:40:22.427710 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:40:22.435492 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:40:22.444647 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:40:22.453711 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 00:40:22.463278 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:40:22.471816 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:40:22.480791 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:40:22.489932 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:40:22.498087 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:40:22.505548 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:40:22.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.505656 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:40:22.528910 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:40:22.541558 kernel: audit: type=1131 audit(1768869622.512:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.537581 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:40:22.546359 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 20 00:40:22.546414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:40:22.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.556113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:40:22.556196 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:40:22.594785 kernel: audit: type=1131 audit(1768869622.563:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.581721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:40:22.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.581815 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:40:22.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.590634 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 00:40:22.590697 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:40:22.599022 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 00:40:22.599093 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 00:40:22.615469 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:40:22.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.658010 ignition[1487]: INFO : Ignition 2.22.0 Jan 20 00:40:22.658010 ignition[1487]: INFO : Stage: umount Jan 20 00:40:22.658010 ignition[1487]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:22.658010 ignition[1487]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 00:40:22.658010 ignition[1487]: INFO : umount: umount passed Jan 20 00:40:22.658010 ignition[1487]: INFO : Ignition finished successfully Jan 20 00:40:22.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.625465 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:40:22.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:22.632606 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:40:22.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.632720 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:40:22.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.652370 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:40:22.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.652450 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:40:22.662469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 00:40:22.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.662548 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:40:22.681522 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:40:22.681601 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:40:22.691654 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 00:40:22.691879 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:40:22.698772 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:40:22.698812 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:40:22.707190 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 00:40:22.707224 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 00:40:22.716450 systemd[1]: Stopped target network.target - Network. Jan 20 00:40:22.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.725513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:40:22.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.725572 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:40:22.734581 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:40:22.742041 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:40:22.745315 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:40:22.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 20 00:40:22.751312 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:40:22.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.762791 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:40:22.770461 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:40:22.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.770504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:40:22.778290 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:40:22.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.880000 audit: BPF prog-id=9 op=UNLOAD Jan 20 00:40:22.882000 audit: BPF prog-id=6 op=UNLOAD Jan 20 00:40:22.778330 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:40:22.786318 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 00:40:22.786342 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 00:40:22.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.798565 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:40:22.798631 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:40:22.806452 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:40:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.806492 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:40:22.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.814722 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:40:22.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.827695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 00:40:22.836197 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:40:22.836697 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 00:40:22.836772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:40:22.846965 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:40:22.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.847034 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 20 00:40:22.861067 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:40:22.861151 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:40:23.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.873340 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:40:23.022405 kernel: hv_netvsc 7ced8d8a-f9e3-7ced-8d8a-f9e37ced8d8a eth0: Data path switched from VF: enP34888s1 Jan 20 00:40:23.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.873450 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:40:23.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.882386 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 00:40:22.888122 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:40:22.888151 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:40:22.896639 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:40:23.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.896682 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 00:40:23.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.905784 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:40:23.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.919484 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:40:23.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.919544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:40:23.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.928590 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:40:23.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:23.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:22.928635 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:40:22.936807 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:40:22.936837 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:40:22.945827 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:40:22.972766 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:40:23.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:22.972902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:40:22.978593 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:40:22.978621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 00:40:22.987234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:40:22.987256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:40:22.995893 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:40:22.995928 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:40:23.008207 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:40:23.008244 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:40:23.022230 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:40:23.022264 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:40:23.031926 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:40:23.045345 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 00:40:23.045410 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:40:23.054680 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:40:23.054721 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:40:23.067781 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 00:40:23.067823 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:40:23.076989 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:40:23.077027 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:40:23.086514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:40:23.086554 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:23.095812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 00:40:23.095894 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:40:23.123287 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:40:23.123440 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:40:23.132615 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 00:40:23.140837 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:40:23.200334 systemd[1]: Switching root. 
Jan 20 00:40:23.341805 systemd-journald[568]: Journal stopped Jan 20 00:40:30.385536 systemd-journald[568]: Received SIGTERM from PID 1 (systemd). Jan 20 00:40:30.385557 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:40:30.385567 kernel: SELinux: policy capability open_perms=1 Jan 20 00:40:30.385574 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:40:30.385580 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:40:30.385585 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:40:30.385591 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:40:30.385597 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:40:30.385603 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:40:30.385609 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 00:40:30.385615 systemd[1]: Successfully loaded SELinux policy in 279.092ms. Jan 20 00:40:30.385622 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.451ms. Jan 20 00:40:30.385629 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 00:40:30.385635 systemd[1]: Detected virtualization microsoft. Jan 20 00:40:30.385643 systemd[1]: Detected architecture arm64. Jan 20 00:40:30.385649 systemd[1]: Detected first boot. Jan 20 00:40:30.385656 systemd[1]: Hostname set to <ci-4515.1.0-n-fc9e3ff023>. Jan 20 00:40:30.385662 systemd[1]: Initializing machine ID from random generator. Jan 20 00:40:30.385668 zram_generator::config[1531]: No configuration found. Jan 20 00:40:30.385676 kernel: NET: Registered PF_VSOCK protocol family Jan 20 00:40:30.385682 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:40:30.385689 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 20 00:40:30.385695 kernel: audit: type=1334 audit(1768869629.481:94): prog-id=12 op=LOAD Jan 20 00:40:30.385702 kernel: audit: type=1334 audit(1768869629.481:95): prog-id=3 op=UNLOAD Jan 20 00:40:30.385708 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 00:40:30.385715 kernel: audit: type=1334 audit(1768869629.484:96): prog-id=13 op=LOAD Jan 20 00:40:30.385721 kernel: audit: type=1334 audit(1768869629.485:97): prog-id=14 op=LOAD Jan 20 00:40:30.385727 kernel: audit: type=1334 audit(1768869629.485:98): prog-id=4 op=UNLOAD Jan 20 00:40:30.385733 kernel: audit: type=1334 audit(1768869629.485:99): prog-id=5 op=UNLOAD Jan 20 00:40:30.385739 kernel: audit: type=1131 audit(1768869629.485:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.385746 kernel: audit: type=1334 audit(1768869629.525:101): prog-id=12 op=UNLOAD Jan 20 00:40:30.385753 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 00:40:30.385759 kernel: audit: type=1130 audit(1768869629.536:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.385765 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
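The kernel lines above enumerate the SELinux policy capabilities compiled into the loaded policy. If it helps to confirm them on a running system, they can normally be read back from selinuxfs; the path below is the conventional mount point and is an assumption, not something this log demonstrates:

    import os

    capdir = "/sys/fs/selinux/policy_capabilities"   # conventional selinuxfs location
    for name in sorted(os.listdir(capdir)):
        with open(os.path.join(capdir, name)) as f:
            print(f"{name}={f.read().strip()}")       # e.g. network_peer_controls=1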
Jan 20 00:40:30.385772 kernel: audit: type=1131 audit(1768869629.536:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.385779 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:40:30.385785 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:40:30.385793 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:40:30.385800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:40:30.385806 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:40:30.385814 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:40:30.385821 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:40:30.385828 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:40:30.385835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:40:30.385843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:40:30.385849 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:40:30.385856 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 00:40:30.385863 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 00:40:30.385869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:40:30.385876 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 00:40:30.385883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:40:30.385890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:40:30.385896 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 00:40:30.385903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 00:40:30.385910 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 00:40:30.385917 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:40:30.385924 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:40:30.385930 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:40:30.385937 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 00:40:30.385943 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:40:30.385950 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:40:30.385957 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:40:30.385964 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:40:30.385971 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 00:40:30.385978 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 00:40:30.385985 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 20 00:40:30.385992 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:40:30.385999 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 00:40:30.386006 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 00:40:30.386012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:40:30.386019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:40:30.386026 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:40:30.386033 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:40:30.386040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:40:30.386046 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:40:30.386053 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:40:30.386060 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:40:30.386066 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:40:30.386073 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:40:30.386081 systemd[1]: Reached target machines.target - Containers. Jan 20 00:40:30.386087 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:40:30.386094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:40:30.386101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:40:30.386108 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 00:40:30.386114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:40:30.386121 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:40:30.386129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:40:30.386136 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 00:40:30.386143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:40:30.386149 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 00:40:30.386156 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 00:40:30.386163 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 00:40:30.386169 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 00:40:30.386177 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 00:40:30.386184 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 00:40:30.386190 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:40:30.386197 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:40:30.386204 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 20 00:40:30.386210 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:40:30.386218 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 00:40:30.386224 kernel: fuse: init (API version 7.41) Jan 20 00:40:30.386231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:40:30.386237 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:40:30.386244 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:40:30.386250 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:40:30.386257 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:40:30.386278 systemd-journald[1622]: Collecting audit messages is enabled. Jan 20 00:40:30.386293 systemd-journald[1622]: Journal started Jan 20 00:40:30.386315 systemd-journald[1622]: Runtime Journal (/run/log/journal/962f4ef07ebf4cf997f0da7f1e5f58e6) is 8M, max 78.3M, 70.3M free. Jan 20 00:40:29.849000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 00:40:30.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.254000 audit: BPF prog-id=14 op=UNLOAD Jan 20 00:40:30.254000 audit: BPF prog-id=13 op=UNLOAD Jan 20 00:40:30.255000 audit: BPF prog-id=15 op=LOAD Jan 20 00:40:30.255000 audit: BPF prog-id=16 op=LOAD Jan 20 00:40:30.255000 audit: BPF prog-id=17 op=LOAD Jan 20 00:40:30.382000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 00:40:30.382000 audit[1622]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff1823b50 a2=4000 a3=0 items=0 ppid=1 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:40:30.382000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 00:40:29.471344 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:40:29.486529 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 00:40:29.486921 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 00:40:29.487189 systemd[1]: systemd-journald.service: Consumed 2.499s CPU time. Jan 20 00:40:30.396968 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:40:30.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.397769 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:40:30.402642 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 20 00:40:30.406959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 00:40:30.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.412098 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:40:30.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.417722 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:40:30.417846 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:40:30.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.426130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:40:30.426237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:40:30.427323 kernel: ACPI: bus type drm_connector registered Jan 20 00:40:30.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.431646 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:40:30.431789 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:40:30.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.436658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:40:30.438331 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:40:30.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.443883 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 20 00:40:30.444005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:40:30.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.449103 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:40:30.449210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:40:30.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.454296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:40:30.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.459481 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:40:30.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.465908 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 00:40:30.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.471770 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 00:40:30.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.485806 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:40:30.491706 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 00:40:30.500400 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:40:30.514398 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 00:40:30.519286 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:40:30.519322 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:40:30.524379 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 20 00:40:30.529729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:40:30.529807 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 00:40:30.530804 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:40:30.536008 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:40:30.540794 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:40:30.541531 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:40:30.546293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:40:30.559420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:40:30.565791 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 00:40:30.572044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:40:30.578292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:40:30.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.583445 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:40:30.588840 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:40:30.593927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:40:30.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.602285 systemd-journald[1622]: Time spent on flushing to /var/log/journal/962f4ef07ebf4cf997f0da7f1e5f58e6 is 10.051ms for 1081 entries. Jan 20 00:40:30.602285 systemd-journald[1622]: System Journal (/var/log/journal/962f4ef07ebf4cf997f0da7f1e5f58e6) is 8M, max 2.2G, 2.2G free. Jan 20 00:40:30.644732 systemd-journald[1622]: Received client request to flush runtime journal. Jan 20 00:40:30.603830 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:40:30.613942 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 00:40:30.645972 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:40:30.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.671432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:40:30.672669 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 20 00:40:30.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.689007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:40:30.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.702326 kernel: loop1: detected capacity change from 0 to 109872 Jan 20 00:40:30.803990 systemd-tmpfiles[1672]: ACLs are not supported, ignoring. Jan 20 00:40:30.804001 systemd-tmpfiles[1672]: ACLs are not supported, ignoring. Jan 20 00:40:30.806934 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:40:30.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:30.814843 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:40:31.147243 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 00:40:31.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.151000 audit: BPF prog-id=18 op=LOAD Jan 20 00:40:31.151000 audit: BPF prog-id=19 op=LOAD Jan 20 00:40:31.151000 audit: BPF prog-id=20 op=LOAD Jan 20 00:40:31.153638 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 00:40:31.157000 audit: BPF prog-id=21 op=LOAD Jan 20 00:40:31.159527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:40:31.166418 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:40:31.182187 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Jan 20 00:40:31.182202 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Jan 20 00:40:31.184512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:40:31.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.233000 audit: BPF prog-id=22 op=LOAD Jan 20 00:40:31.233000 audit: BPF prog-id=23 op=LOAD Jan 20 00:40:31.233000 audit: BPF prog-id=24 op=LOAD Jan 20 00:40:31.235547 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 00:40:31.249000 audit: BPF prog-id=25 op=LOAD Jan 20 00:40:31.249000 audit: BPF prog-id=26 op=LOAD Jan 20 00:40:31.249000 audit: BPF prog-id=27 op=LOAD Jan 20 00:40:31.251203 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:40:31.302165 systemd-nsresourced[1694]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 00:40:31.308499 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Jan 20 00:40:31.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.317288 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:40:31.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.379362 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:40:31.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.385000 audit: BPF prog-id=8 op=UNLOAD Jan 20 00:40:31.385000 audit: BPF prog-id=7 op=UNLOAD Jan 20 00:40:31.385000 audit: BPF prog-id=28 op=LOAD Jan 20 00:40:31.386000 audit: BPF prog-id=29 op=LOAD Jan 20 00:40:31.388687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:40:31.394605 systemd-oomd[1689]: No swap; memory pressure usage will be degraded Jan 20 00:40:31.395227 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 00:40:31.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.406322 kernel: loop2: detected capacity change from 0 to 100192 Jan 20 00:40:31.423479 systemd-udevd[1712]: Using default interface naming scheme 'v257'. Jan 20 00:40:31.426803 systemd-resolved[1690]: Positive Trust Anchors: Jan 20 00:40:31.426819 systemd-resolved[1690]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:40:31.426822 systemd-resolved[1690]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 00:40:31.426841 systemd-resolved[1690]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:40:31.525565 systemd-resolved[1690]: Using system hostname 'ci-4515.1.0-n-fc9e3ff023'. Jan 20 00:40:31.526663 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:40:31.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.531487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:40:31.734552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:40:31.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.742000 audit: BPF prog-id=30 op=LOAD Jan 20 00:40:31.744889 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:40:31.805822 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 20 00:40:31.847323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 00:40:31.892324 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 00:40:31.892400 kernel: hv_vmbus: registering driver hv_balloon Jan 20 00:40:31.909010 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 00:40:31.909067 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 00:40:31.909090 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 00:40:31.909105 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 20 00:40:31.914598 kernel: Console: switching to colour dummy device 80x25 Jan 20 00:40:31.920852 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 00:40:31.959317 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:40:31.985999 systemd-networkd[1725]: lo: Link UP Jan 20 00:40:31.986254 systemd-networkd[1725]: lo: Gained carrier Jan 20 00:40:31.987631 systemd-networkd[1725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 00:40:31.987636 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:40:31.987913 systemd-networkd[1725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:40:31.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:31.992217 systemd[1]: Reached target network.target - Network. Jan 20 00:40:31.997079 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 00:40:32.004869 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:40:32.015751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:32.029364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:40:32.029801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:32.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:32.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:32.037287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 20 00:40:32.064353 kernel: mlx5_core 8848:00:02.0 enP34888s1: Link up Jan 20 00:40:32.086721 kernel: hv_netvsc 7ced8d8a-f9e3-7ced-8d8a-f9e37ced8d8a eth0: Data path switched to VF: enP34888s1 Jan 20 00:40:32.087387 systemd-networkd[1725]: enP34888s1: Link UP Jan 20 00:40:32.087590 systemd-networkd[1725]: eth0: Link UP Jan 20 00:40:32.087595 systemd-networkd[1725]: eth0: Gained carrier Jan 20 00:40:32.087607 systemd-networkd[1725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 00:40:32.091667 systemd-networkd[1725]: enP34888s1: Gained carrier Jan 20 00:40:32.094335 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 00:40:32.101347 systemd-networkd[1725]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 00:40:32.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:32.224327 kernel: loop3: detected capacity change from 0 to 27736 Jan 20 00:40:32.251668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 00:40:32.257550 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 00:40:32.285317 kernel: MACsec IEEE 802.1AE Jan 20 00:40:32.353966 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:40:32.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:32.871409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:32.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:32.911316 kernel: loop4: detected capacity change from 0 to 211168 Jan 20 00:40:32.966412 kernel: loop5: detected capacity change from 0 to 109872 Jan 20 00:40:32.985359 kernel: loop6: detected capacity change from 0 to 100192 Jan 20 00:40:32.996321 kernel: loop7: detected capacity change from 0 to 27736 Jan 20 00:40:33.009319 kernel: loop1: detected capacity change from 0 to 211168 Jan 20 00:40:33.020843 (sd-merge)[1850]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 20 00:40:33.023013 (sd-merge)[1850]: Merged extensions into '/usr'. Jan 20 00:40:33.026135 systemd[1]: Reload requested from client PID 1670 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:40:33.026238 systemd[1]: Reloading... Jan 20 00:40:33.082492 zram_generator::config[1883]: No configuration found. Jan 20 00:40:33.263613 systemd[1]: Reloading finished in 236 ms. Jan 20 00:40:33.293595 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:40:33.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 00:40:33.313157 systemd[1]: Starting ensure-sysext.service... Jan 20 00:40:33.319216 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:40:33.323000 audit: BPF prog-id=31 op=LOAD Jan 20 00:40:33.323000 audit: BPF prog-id=30 op=UNLOAD Jan 20 00:40:33.324000 audit: BPF prog-id=32 op=LOAD Jan 20 00:40:33.324000 audit: BPF prog-id=25 op=UNLOAD Jan 20 00:40:33.324000 audit: BPF prog-id=33 op=LOAD Jan 20 00:40:33.324000 audit: BPF prog-id=34 op=LOAD Jan 20 00:40:33.324000 audit: BPF prog-id=26 op=UNLOAD Jan 20 00:40:33.324000 audit: BPF prog-id=27 op=UNLOAD Jan 20 00:40:33.324000 audit: BPF prog-id=35 op=LOAD Jan 20 00:40:33.324000 audit: BPF prog-id=15 op=UNLOAD Jan 20 00:40:33.324000 audit: BPF prog-id=36 op=LOAD Jan 20 00:40:33.324000 audit: BPF prog-id=37 op=LOAD Jan 20 00:40:33.324000 audit: BPF prog-id=16 op=UNLOAD Jan 20 00:40:33.324000 audit: BPF prog-id=17 op=UNLOAD Jan 20 00:40:33.325000 audit: BPF prog-id=38 op=LOAD Jan 20 00:40:33.325000 audit: BPF prog-id=22 op=UNLOAD Jan 20 00:40:33.325000 audit: BPF prog-id=39 op=LOAD Jan 20 00:40:33.325000 audit: BPF prog-id=40 op=LOAD Jan 20 00:40:33.325000 audit: BPF prog-id=23 op=UNLOAD Jan 20 00:40:33.325000 audit: BPF prog-id=24 op=UNLOAD Jan 20 00:40:33.325000 audit: BPF prog-id=41 op=LOAD Jan 20 00:40:33.325000 audit: BPF prog-id=18 op=UNLOAD Jan 20 00:40:33.325000 audit: BPF prog-id=42 op=LOAD Jan 20 00:40:33.325000 audit: BPF prog-id=43 op=LOAD Jan 20 00:40:33.325000 audit: BPF prog-id=19 op=UNLOAD Jan 20 00:40:33.325000 audit: BPF prog-id=20 op=UNLOAD Jan 20 00:40:33.326000 audit: BPF prog-id=44 op=LOAD Jan 20 00:40:33.326000 audit: BPF prog-id=21 op=UNLOAD Jan 20 00:40:33.326000 audit: BPF prog-id=45 op=LOAD Jan 20 00:40:33.326000 audit: BPF prog-id=46 op=LOAD Jan 20 00:40:33.326000 audit: BPF prog-id=28 op=UNLOAD Jan 20 00:40:33.326000 audit: BPF prog-id=29 op=UNLOAD Jan 20 00:40:33.331614 systemd[1]: Reload requested from client PID 1938 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:40:33.331627 systemd[1]: Reloading... Jan 20 00:40:33.362765 systemd-tmpfiles[1939]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 00:40:33.362789 systemd-tmpfiles[1939]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 00:40:33.362981 systemd-tmpfiles[1939]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:40:33.364967 systemd-tmpfiles[1939]: ACLs are not supported, ignoring. Jan 20 00:40:33.365105 systemd-tmpfiles[1939]: ACLs are not supported, ignoring. Jan 20 00:40:33.399328 zram_generator::config[1991]: No configuration found. Jan 20 00:40:33.400801 systemd-tmpfiles[1939]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:40:33.400810 systemd-tmpfiles[1939]: Skipping /boot Jan 20 00:40:33.407794 systemd-tmpfiles[1939]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:40:33.407884 systemd-tmpfiles[1939]: Skipping /boot Jan 20 00:40:33.416455 systemd-networkd[1725]: eth0: Gained IPv6LL Jan 20 00:40:33.538790 systemd[1]: Reloading finished in 206 ms. Jan 20 00:40:33.557387 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 20 00:40:33.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.562000 audit: BPF prog-id=47 op=LOAD Jan 20 00:40:33.562000 audit: BPF prog-id=38 op=UNLOAD Jan 20 00:40:33.562000 audit: BPF prog-id=48 op=LOAD Jan 20 00:40:33.562000 audit: BPF prog-id=49 op=LOAD Jan 20 00:40:33.562000 audit: BPF prog-id=39 op=UNLOAD Jan 20 00:40:33.562000 audit: BPF prog-id=40 op=UNLOAD Jan 20 00:40:33.562000 audit: BPF prog-id=50 op=LOAD Jan 20 00:40:33.562000 audit: BPF prog-id=31 op=UNLOAD Jan 20 00:40:33.563000 audit: BPF prog-id=51 op=LOAD Jan 20 00:40:33.563000 audit: BPF prog-id=35 op=UNLOAD Jan 20 00:40:33.563000 audit: BPF prog-id=52 op=LOAD Jan 20 00:40:33.563000 audit: BPF prog-id=53 op=LOAD Jan 20 00:40:33.563000 audit: BPF prog-id=36 op=UNLOAD Jan 20 00:40:33.563000 audit: BPF prog-id=37 op=UNLOAD Jan 20 00:40:33.563000 audit: BPF prog-id=54 op=LOAD Jan 20 00:40:33.563000 audit: BPF prog-id=55 op=LOAD Jan 20 00:40:33.563000 audit: BPF prog-id=45 op=UNLOAD Jan 20 00:40:33.563000 audit: BPF prog-id=46 op=UNLOAD Jan 20 00:40:33.563000 audit: BPF prog-id=56 op=LOAD Jan 20 00:40:33.563000 audit: BPF prog-id=44 op=UNLOAD Jan 20 00:40:33.564000 audit: BPF prog-id=57 op=LOAD Jan 20 00:40:33.564000 audit: BPF prog-id=41 op=UNLOAD Jan 20 00:40:33.564000 audit: BPF prog-id=58 op=LOAD Jan 20 00:40:33.564000 audit: BPF prog-id=59 op=LOAD Jan 20 00:40:33.564000 audit: BPF prog-id=42 op=UNLOAD Jan 20 00:40:33.564000 audit: BPF prog-id=43 op=UNLOAD Jan 20 00:40:33.565000 audit: BPF prog-id=60 op=LOAD Jan 20 00:40:33.565000 audit: BPF prog-id=32 op=UNLOAD Jan 20 00:40:33.565000 audit: BPF prog-id=61 op=LOAD Jan 20 00:40:33.565000 audit: BPF prog-id=62 op=LOAD Jan 20 00:40:33.565000 audit: BPF prog-id=33 op=UNLOAD Jan 20 00:40:33.565000 audit: BPF prog-id=34 op=UNLOAD Jan 20 00:40:33.575284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:40:33.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.585599 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:40:33.591016 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 00:40:33.600954 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 00:40:33.607513 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:40:33.612542 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:40:33.619473 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:40:33.626935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:40:33.627958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:40:33.635493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:40:33.642480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 00:40:33.646599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:40:33.646737 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 00:40:33.646804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 00:40:33.647569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:40:33.647726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:40:33.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.652733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:40:33.652871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:40:33.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.659080 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:40:33.659362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:40:33.663000 audit[2035]: SYSTEM_BOOT pid=2035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.671899 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:40:33.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.682144 systemd[1]: Finished ensure-sysext.service. Jan 20 00:40:33.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:33.686910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:40:33.687919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:40:33.692967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:40:33.700426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:40:33.706445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:40:33.710931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:40:33.711076 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 00:40:33.711162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 00:40:33.711250 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:40:33.716272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:40:33.716549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:40:33.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.722944 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:40:33.723298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:40:33.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.729794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:40:33.729954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:40:33.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.735702 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:40:33.735913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 00:40:33.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:33.742548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:40:33.742698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:40:34.064439 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:40:34.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:34.219000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 00:40:34.219000 audit[2072]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcec729f0 a2=420 a3=0 items=0 ppid=2031 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:40:34.219000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 00:40:34.221362 augenrules[2072]: No rules Jan 20 00:40:34.222457 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:40:34.222762 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 00:40:34.793939 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:40:34.799444 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:40:43.525449 ldconfig[2033]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 00:40:43.535992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:40:43.542331 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:40:43.554074 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:40:43.558847 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:40:43.563348 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:40:43.568650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:40:43.573998 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:40:43.578450 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:40:43.583780 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 00:40:43.588979 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 20 00:40:43.593597 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:40:43.599107 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:40:43.599136 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:40:43.602885 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:40:43.637043 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:40:43.642616 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:40:43.647960 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 00:40:43.653464 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 00:40:43.658692 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 00:40:43.664664 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:40:43.669102 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 00:40:43.674438 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:40:43.678975 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:40:43.682871 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:40:43.686713 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:40:43.686737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:40:43.688467 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 00:40:43.699397 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:40:43.706470 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 00:40:43.715464 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:40:43.722435 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:40:43.725471 chronyd[2085]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 20 00:40:43.726706 chronyd[2085]: Timezone right/UTC failed leap second check, ignoring Jan 20 00:40:43.727358 chronyd[2085]: Loaded seccomp filter (level 2) Jan 20 00:40:43.738079 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:40:43.743531 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:40:43.747771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:40:43.748482 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 20 00:40:43.752847 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 20 00:40:43.754750 KVP[2095]: KVP starting; pid is:2095 Jan 20 00:40:43.755850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:40:43.760325 KVP[2095]: KVP LIC Version: 3.1 Jan 20 00:40:43.763322 kernel: hv_utils: KVP IC version 4.0 Jan 20 00:40:43.763579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 20 00:40:43.769443 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:40:43.777629 jq[2093]: false Jan 20 00:40:43.777843 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:40:43.786252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:40:43.794133 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:40:43.803441 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:40:43.809860 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:40:43.810164 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:40:43.810993 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:40:43.820005 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:40:43.826752 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 00:40:43.831075 jq[2117]: true Jan 20 00:40:43.835056 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:40:43.841985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:40:43.843261 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:40:43.844140 extend-filesystems[2094]: Found /dev/sda6 Jan 20 00:40:43.845594 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:40:43.845750 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:40:43.854557 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:40:43.860630 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:40:43.885200 extend-filesystems[2094]: Found /dev/sda9 Jan 20 00:40:43.892877 extend-filesystems[2094]: Checking size of /dev/sda9 Jan 20 00:40:43.892629 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:40:43.904109 jq[2131]: true Jan 20 00:40:43.932692 update_engine[2116]: I20260120 00:40:43.932627 2116 main.cc:92] Flatcar Update Engine starting Jan 20 00:40:43.933474 extend-filesystems[2094]: Resized partition /dev/sda9 Jan 20 00:40:43.945645 tar[2129]: linux-arm64/LICENSE Jan 20 00:40:43.945645 tar[2129]: linux-arm64/helm Jan 20 00:40:43.963577 systemd-logind[2110]: New seat seat0. Jan 20 00:40:43.965656 systemd-logind[2110]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 20 00:40:43.968098 extend-filesystems[2167]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 00:40:43.998873 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 20 00:40:43.998902 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 20 00:40:43.968190 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:40:44.034889 extend-filesystems[2167]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 20 00:40:44.034889 extend-filesystems[2167]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 20 00:40:44.034889 extend-filesystems[2167]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. 
Jan 20 00:40:44.109388 bash[2163]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:40:44.109469 update_engine[2116]: I20260120 00:40:44.108831 2116 update_check_scheduler.cc:74] Next update check in 8m16s Jan 20 00:40:44.109504 extend-filesystems[2094]: Resized filesystem in /dev/sda9 Jan 20 00:40:44.100609 dbus-daemon[2088]: [system] SELinux support is enabled Jan 20 00:40:44.035562 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:40:44.136285 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 00:40:44.036855 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:40:44.061754 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:40:44.080976 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:40:44.101020 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:40:44.136897 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:40:44.151601 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:40:44.151718 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:40:44.158690 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:40:44.158764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:40:44.173611 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:40:44.193856 coreos-metadata[2087]: Jan 20 00:40:44.193 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 00:40:44.196460 coreos-metadata[2087]: Jan 20 00:40:44.195 INFO Fetch successful Jan 20 00:40:44.196460 coreos-metadata[2087]: Jan 20 00:40:44.195 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 20 00:40:44.200598 coreos-metadata[2087]: Jan 20 00:40:44.200 INFO Fetch successful Jan 20 00:40:44.200598 coreos-metadata[2087]: Jan 20 00:40:44.200 INFO Fetching http://168.63.129.16/machine/72f3d0f6-8916-4114-a3f5-3b41a1113700/8bc4f046%2Dd912%2D4e8e%2D92af%2D7899e08a14d9.%5Fci%2D4515.1.0%2Dn%2Dfc9e3ff023?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 20 00:40:44.203836 coreos-metadata[2087]: Jan 20 00:40:44.201 INFO Fetch successful Jan 20 00:40:44.203836 coreos-metadata[2087]: Jan 20 00:40:44.201 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 20 00:40:44.205881 sshd_keygen[2115]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:40:44.211047 coreos-metadata[2087]: Jan 20 00:40:44.211 INFO Fetch successful Jan 20 00:40:44.244371 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 00:40:44.249623 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:40:44.255885 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:40:44.265393 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:40:44.275143 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
Jan 20 00:40:44.282568 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:40:44.282748 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:40:44.290605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:40:44.305818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:40:44.317211 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:40:44.325332 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 20 00:40:44.332042 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:40:44.339447 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 20 00:40:44.426168 tar[2129]: linux-arm64/README.md Jan 20 00:40:44.439168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:40:44.449120 locksmithd[2237]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:40:44.706364 containerd[2132]: time="2026-01-20T00:40:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 00:40:44.707173 containerd[2132]: time="2026-01-20T00:40:44.706831752Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 00:40:44.713203 containerd[2132]: time="2026-01-20T00:40:44.713175656Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.28µs" Jan 20 00:40:44.713203 containerd[2132]: time="2026-01-20T00:40:44.713195808Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 00:40:44.713279 containerd[2132]: time="2026-01-20T00:40:44.713223920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 00:40:44.713279 containerd[2132]: time="2026-01-20T00:40:44.713235296Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 00:40:44.713467 containerd[2132]: time="2026-01-20T00:40:44.713355464Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 00:40:44.713467 containerd[2132]: time="2026-01-20T00:40:44.713371216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 00:40:44.713467 containerd[2132]: time="2026-01-20T00:40:44.713413600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 00:40:44.713467 containerd[2132]: time="2026-01-20T00:40:44.713419976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.713578 containerd[2132]: time="2026-01-20T00:40:44.713552232Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.713578 containerd[2132]: time="2026-01-20T00:40:44.713568576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 00:40:44.713578 containerd[2132]: 
time="2026-01-20T00:40:44.713575360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 00:40:44.713578 containerd[2132]: time="2026-01-20T00:40:44.713580264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713705344Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713716720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713768648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713882896Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713900944Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713911344Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.713934960Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.714062864Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 00:40:44.714331 containerd[2132]: time="2026-01-20T00:40:44.714113208Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:40:44.727832 containerd[2132]: time="2026-01-20T00:40:44.727796024Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 00:40:44.727946 containerd[2132]: time="2026-01-20T00:40:44.727893024Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 00:40:44.741223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:40:45.028563 containerd[2132]: time="2026-01-20T00:40:45.028216896Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 00:40:45.028563 containerd[2132]: time="2026-01-20T00:40:45.028256368Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 00:40:45.028563 containerd[2132]: time="2026-01-20T00:40:45.028281400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 00:40:45.028563 containerd[2132]: time="2026-01-20T00:40:45.028291584Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 00:40:45.028248 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029294536Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029348128Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029361992Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029371304Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029381152Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029389256Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029395752Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029406248Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029539184Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029552840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029563656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029595960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029603040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 00:40:45.029748 containerd[2132]: time="2026-01-20T00:40:45.029609512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 00:40:45.030322 containerd[2132]: 
time="2026-01-20T00:40:45.030104664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 00:40:45.030322 containerd[2132]: time="2026-01-20T00:40:45.030131232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 00:40:45.030322 containerd[2132]: time="2026-01-20T00:40:45.030145200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 00:40:45.030322 containerd[2132]: time="2026-01-20T00:40:45.030155592Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 00:40:45.030322 containerd[2132]: time="2026-01-20T00:40:45.030165560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 00:40:45.030322 containerd[2132]: time="2026-01-20T00:40:45.030194128Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 00:40:45.030322 containerd[2132]: time="2026-01-20T00:40:45.030237456Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 00:40:45.031738 containerd[2132]: time="2026-01-20T00:40:45.030825944Z" level=info msg="Start snapshots syncer" Jan 20 00:40:45.031738 containerd[2132]: time="2026-01-20T00:40:45.030884800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 00:40:45.031738 containerd[2132]: time="2026-01-20T00:40:45.031184168Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031228600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031280416Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031411584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031434544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031443592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031454064Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031465688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031475008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031488608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031498856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031509192Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031534272Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031543696Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 00:40:45.031872 containerd[2132]: time="2026-01-20T00:40:45.031551320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031560272Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031567656Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031574488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031584264Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031595768Z" level=info msg="runtime interface created" Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031599472Z" level=info msg="created NRI interface" Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031605232Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031615584Z" level=info msg="Connect containerd service" Jan 20 00:40:45.032026 containerd[2132]: time="2026-01-20T00:40:45.031634520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:40:45.032243 containerd[2132]: time="2026-01-20T00:40:45.032214168Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:40:45.353334 kubelet[2293]: E0120 00:40:45.352253 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:40:45.354545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:40:45.354763 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:40:45.356383 systemd[1]: kubelet.service: Consumed 542ms CPU time, 257.1M memory peak. Jan 20 00:40:45.604153 containerd[2132]: time="2026-01-20T00:40:45.604012080Z" level=info msg="Start subscribing containerd event" Jan 20 00:40:45.604153 containerd[2132]: time="2026-01-20T00:40:45.604076176Z" level=info msg="Start recovering state" Jan 20 00:40:45.604247 containerd[2132]: time="2026-01-20T00:40:45.604155144Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:40:45.604247 containerd[2132]: time="2026-01-20T00:40:45.604196416Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604390240Z" level=info msg="Start event monitor" Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604409616Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604415792Z" level=info msg="Start streaming server" Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604422688Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604428152Z" level=info msg="runtime interface starting up..." Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604431984Z" level=info msg="starting plugins..." Jan 20 00:40:45.604545 containerd[2132]: time="2026-01-20T00:40:45.604443216Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 00:40:45.608469 containerd[2132]: time="2026-01-20T00:40:45.604699184Z" level=info msg="containerd successfully booted in 0.898667s" Jan 20 00:40:45.604871 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:40:45.611857 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:40:45.622005 systemd[1]: Startup finished in 4.214s (kernel) + 16.250s (initrd) + 20.955s (userspace) = 41.420s. 
Jan 20 00:40:46.093065 login[2270]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 20 00:40:46.093251 login[2269]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:46.106163 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:40:46.109498 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:40:46.111550 systemd-logind[2110]: New session 1 of user core. Jan 20 00:40:46.158353 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:40:46.160420 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 00:40:46.169793 (systemd)[2318]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:40:46.171829 systemd-logind[2110]: New session c1 of user core. Jan 20 00:40:46.355330 systemd[2318]: Queued start job for default target default.target. Jan 20 00:40:46.359946 systemd[2318]: Created slice app.slice - User Application Slice. Jan 20 00:40:46.360065 systemd[2318]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 00:40:46.360131 systemd[2318]: Reached target paths.target - Paths. Jan 20 00:40:46.360210 systemd[2318]: Reached target timers.target - Timers. Jan 20 00:40:46.361271 systemd[2318]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:40:46.361914 systemd[2318]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 00:40:46.368607 systemd[2318]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:40:46.368868 systemd[2318]: Reached target sockets.target - Sockets. Jan 20 00:40:46.372707 systemd[2318]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 00:40:46.372865 systemd[2318]: Reached target basic.target - Basic System. Jan 20 00:40:46.372976 systemd[2318]: Reached target default.target - Main User Target. Jan 20 00:40:46.373161 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:40:46.373421 systemd[2318]: Startup finished in 197ms. Jan 20 00:40:46.379466 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:40:47.094601 login[2270]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:47.100559 systemd-logind[2110]: New session 2 of user core. Jan 20 00:40:47.109445 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 20 00:40:47.168034 waagent[2272]: 2026-01-20T00:40:47.167975Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 20 00:40:47.172543 waagent[2272]: 2026-01-20T00:40:47.172504Z INFO Daemon Daemon OS: flatcar 4515.1.0 Jan 20 00:40:47.175957 waagent[2272]: 2026-01-20T00:40:47.175929Z INFO Daemon Daemon Python: 3.11.13 Jan 20 00:40:47.181329 waagent[2272]: 2026-01-20T00:40:47.179205Z INFO Daemon Daemon Run daemon Jan 20 00:40:47.182211 waagent[2272]: 2026-01-20T00:40:47.182179Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Jan 20 00:40:47.189150 waagent[2272]: 2026-01-20T00:40:47.189063Z INFO Daemon Daemon Using waagent for provisioning Jan 20 00:40:47.193226 waagent[2272]: 2026-01-20T00:40:47.193191Z INFO Daemon Daemon Activate resource disk Jan 20 00:40:47.196692 waagent[2272]: 2026-01-20T00:40:47.196663Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 20 00:40:47.204797 waagent[2272]: 2026-01-20T00:40:47.204764Z INFO Daemon Daemon Found device: None Jan 20 00:40:47.208081 waagent[2272]: 2026-01-20T00:40:47.208053Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 20 00:40:47.214323 waagent[2272]: 2026-01-20T00:40:47.214286Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 20 00:40:47.222961 waagent[2272]: 2026-01-20T00:40:47.222911Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 00:40:47.227572 waagent[2272]: 2026-01-20T00:40:47.227177Z INFO Daemon Daemon Running default provisioning handler Jan 20 00:40:47.235891 waagent[2272]: 2026-01-20T00:40:47.235847Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 20 00:40:47.246062 waagent[2272]: 2026-01-20T00:40:47.246027Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 20 00:40:47.253370 waagent[2272]: 2026-01-20T00:40:47.253342Z INFO Daemon Daemon cloud-init is enabled: False Jan 20 00:40:47.257134 waagent[2272]: 2026-01-20T00:40:47.257110Z INFO Daemon Daemon Copying ovf-env.xml Jan 20 00:40:47.364007 waagent[2272]: 2026-01-20T00:40:47.363909Z INFO Daemon Daemon Successfully mounted dvd Jan 20 00:40:47.405205 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 20 00:40:47.407088 waagent[2272]: 2026-01-20T00:40:47.407055Z INFO Daemon Daemon Detect protocol endpoint Jan 20 00:40:47.411319 waagent[2272]: 2026-01-20T00:40:47.410916Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 00:40:47.415328 waagent[2272]: 2026-01-20T00:40:47.415285Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 20 00:40:47.420061 waagent[2272]: 2026-01-20T00:40:47.420035Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 20 00:40:47.424007 waagent[2272]: 2026-01-20T00:40:47.423979Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 20 00:40:47.427783 waagent[2272]: 2026-01-20T00:40:47.427758Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 20 00:40:47.468367 waagent[2272]: 2026-01-20T00:40:47.468333Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 20 00:40:47.473297 waagent[2272]: 2026-01-20T00:40:47.473278Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 20 00:40:47.477148 waagent[2272]: 2026-01-20T00:40:47.477124Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 20 00:40:47.675387 waagent[2272]: 2026-01-20T00:40:47.674679Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 20 00:40:47.679876 waagent[2272]: 2026-01-20T00:40:47.679837Z INFO Daemon Daemon Forcing an update of the goal state. Jan 20 00:40:47.686366 waagent[2272]: 2026-01-20T00:40:47.686328Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 00:40:47.739549 waagent[2272]: 2026-01-20T00:40:47.739518Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 20 00:40:47.743831 waagent[2272]: 2026-01-20T00:40:47.743800Z INFO Daemon Jan 20 00:40:47.745922 waagent[2272]: 2026-01-20T00:40:47.745896Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b06e5223-fc5f-4e37-a7fa-7a00a3dcfa5c eTag: 10441655265563587581 source: Fabric] Jan 20 00:40:47.754128 waagent[2272]: 2026-01-20T00:40:47.754098Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 20 00:40:47.759039 waagent[2272]: 2026-01-20T00:40:47.759011Z INFO Daemon Jan 20 00:40:47.761089 waagent[2272]: 2026-01-20T00:40:47.761066Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 20 00:40:47.768370 waagent[2272]: 2026-01-20T00:40:47.768343Z INFO Daemon Daemon Downloading artifacts profile blob Jan 20 00:40:47.822327 waagent[2272]: 2026-01-20T00:40:47.821625Z INFO Daemon Downloaded certificate {'thumbprint': 'D7475F5640D56B75CE412ED987DD024D26219A76', 'hasPrivateKey': True} Jan 20 00:40:47.828842 waagent[2272]: 2026-01-20T00:40:47.828808Z INFO Daemon Fetch goal state completed Jan 20 00:40:47.836780 waagent[2272]: 2026-01-20T00:40:47.836753Z INFO Daemon Daemon Starting provisioning Jan 20 00:40:47.840455 waagent[2272]: 2026-01-20T00:40:47.840423Z INFO Daemon Daemon Handle ovf-env.xml. Jan 20 00:40:47.844317 waagent[2272]: 2026-01-20T00:40:47.843912Z INFO Daemon Daemon Set hostname [ci-4515.1.0-n-fc9e3ff023] Jan 20 00:40:47.849550 waagent[2272]: 2026-01-20T00:40:47.849514Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-n-fc9e3ff023] Jan 20 00:40:47.854249 waagent[2272]: 2026-01-20T00:40:47.854214Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 20 00:40:47.858908 waagent[2272]: 2026-01-20T00:40:47.858879Z INFO Daemon Daemon Primary interface is [eth0] Jan 20 00:40:47.897137 systemd-networkd[1725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 00:40:47.897150 systemd-networkd[1725]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 00:40:47.897211 systemd-networkd[1725]: eth0: DHCP lease lost Jan 20 00:40:47.916311 waagent[2272]: 2026-01-20T00:40:47.912518Z INFO Daemon Daemon Create user account if not exists Jan 20 00:40:47.916942 waagent[2272]: 2026-01-20T00:40:47.916908Z INFO Daemon Daemon User core already exists, skip useradd Jan 20 00:40:47.921220 waagent[2272]: 2026-01-20T00:40:47.921191Z INFO Daemon Daemon Configure sudoer Jan 20 00:40:47.924339 systemd-networkd[1725]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 00:40:47.928926 waagent[2272]: 2026-01-20T00:40:47.928850Z INFO Daemon Daemon Configure sshd Jan 20 00:40:47.936383 waagent[2272]: 2026-01-20T00:40:47.936345Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 20 00:40:47.945994 waagent[2272]: 2026-01-20T00:40:47.945961Z INFO Daemon Daemon Deploy ssh public key. Jan 20 00:40:49.029063 waagent[2272]: 2026-01-20T00:40:49.029008Z INFO Daemon Daemon Provisioning complete Jan 20 00:40:49.043291 waagent[2272]: 2026-01-20T00:40:49.043256Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 20 00:40:49.047938 waagent[2272]: 2026-01-20T00:40:49.047908Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 20 00:40:49.055831 waagent[2272]: 2026-01-20T00:40:49.055804Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 20 00:40:49.153346 waagent[2371]: 2026-01-20T00:40:49.152405Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 20 00:40:49.153346 waagent[2371]: 2026-01-20T00:40:49.152514Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Jan 20 00:40:49.153346 waagent[2371]: 2026-01-20T00:40:49.152553Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 20 00:40:49.153346 waagent[2371]: 2026-01-20T00:40:49.152586Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 20 00:40:49.187536 waagent[2371]: 2026-01-20T00:40:49.187493Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 20 00:40:49.187795 waagent[2371]: 2026-01-20T00:40:49.187765Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 00:40:49.187906 waagent[2371]: 2026-01-20T00:40:49.187883Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 00:40:49.192651 waagent[2371]: 2026-01-20T00:40:49.192603Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 00:40:49.196626 waagent[2371]: 2026-01-20T00:40:49.196594Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 20 00:40:49.197036 waagent[2371]: 2026-01-20T00:40:49.197004Z INFO ExtHandler Jan 20 00:40:49.197159 waagent[2371]: 2026-01-20T00:40:49.197135Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d23adf70-7d79-41a9-a04b-29078de23fa8 eTag: 10441655265563587581 source: Fabric] Jan 20 00:40:49.197508 waagent[2371]: 2026-01-20T00:40:49.197474Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 20 00:40:49.198007 waagent[2371]: 2026-01-20T00:40:49.197974Z INFO ExtHandler Jan 20 00:40:49.198122 waagent[2371]: 2026-01-20T00:40:49.198098Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 20 00:40:49.200608 waagent[2371]: 2026-01-20T00:40:49.200581Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 00:40:49.305510 waagent[2371]: 2026-01-20T00:40:49.304463Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D7475F5640D56B75CE412ED987DD024D26219A76', 'hasPrivateKey': True} Jan 20 00:40:49.305510 waagent[2371]: 2026-01-20T00:40:49.304861Z INFO ExtHandler Fetch goal state completed Jan 20 00:40:49.315091 waagent[2371]: 2026-01-20T00:40:49.315060Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Jan 20 00:40:49.318362 waagent[2371]: 2026-01-20T00:40:49.318330Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2371 Jan 20 00:40:49.318536 waagent[2371]: 2026-01-20T00:40:49.318513Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 20 00:40:49.318874 waagent[2371]: 2026-01-20T00:40:49.318848Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 20 00:40:49.320036 waagent[2371]: 2026-01-20T00:40:49.320002Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 20 00:40:49.320456 waagent[2371]: 2026-01-20T00:40:49.320426Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 20 00:40:49.320646 waagent[2371]: 2026-01-20T00:40:49.320618Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 20 00:40:49.321167 waagent[2371]: 2026-01-20T00:40:49.321137Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 20 00:40:50.171342 waagent[2371]: 2026-01-20T00:40:50.170986Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 20 00:40:50.171342 waagent[2371]: 2026-01-20T00:40:50.171157Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 20 00:40:50.175988 waagent[2371]: 2026-01-20T00:40:50.175951Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 20 00:40:50.180290 systemd[1]: Reload requested from client PID 2389 ('systemctl') (unit waagent.service)... Jan 20 00:40:50.180315 systemd[1]: Reloading... Jan 20 00:40:50.249346 zram_generator::config[2434]: No configuration found. Jan 20 00:40:50.397191 systemd[1]: Reloading finished in 216 ms. Jan 20 00:40:50.418336 waagent[2371]: 2026-01-20T00:40:50.417897Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 20 00:40:50.419908 waagent[2371]: 2026-01-20T00:40:50.418282Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 20 00:40:51.000340 waagent[2371]: 2026-01-20T00:40:50.999763Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 20 00:40:51.000340 waagent[2371]: 2026-01-20T00:40:51.000079Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 20 00:40:51.000766 waagent[2371]: 2026-01-20T00:40:51.000723Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 20 00:40:51.001065 waagent[2371]: 2026-01-20T00:40:51.000998Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 20 00:40:51.001319 waagent[2371]: 2026-01-20T00:40:51.001230Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 00:40:51.001424 waagent[2371]: 2026-01-20T00:40:51.001393Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 00:40:51.001552 waagent[2371]: 2026-01-20T00:40:51.001521Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 20 00:40:51.001654 waagent[2371]: 2026-01-20T00:40:51.001619Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 20 00:40:51.002342 waagent[2371]: 2026-01-20T00:40:51.001850Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 00:40:51.002342 waagent[2371]: 2026-01-20T00:40:51.001908Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 00:40:51.002342 waagent[2371]: 2026-01-20T00:40:51.002016Z INFO EnvHandler ExtHandler Configure routes Jan 20 00:40:51.002342 waagent[2371]: 2026-01-20T00:40:51.002056Z INFO EnvHandler ExtHandler Gateway:None Jan 20 00:40:51.002342 waagent[2371]: 2026-01-20T00:40:51.002080Z INFO EnvHandler ExtHandler Routes:None Jan 20 00:40:51.002628 waagent[2371]: 2026-01-20T00:40:51.002569Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 20 00:40:51.002682 waagent[2371]: 2026-01-20T00:40:51.002626Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 20 00:40:51.003121 waagent[2371]: 2026-01-20T00:40:51.003087Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 20 00:40:51.003237 waagent[2371]: 2026-01-20T00:40:51.003217Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 20 00:40:51.008321 waagent[2371]: 2026-01-20T00:40:51.007592Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 20 00:40:51.008321 waagent[2371]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 20 00:40:51.008321 waagent[2371]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 20 00:40:51.008321 waagent[2371]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 20 00:40:51.008321 waagent[2371]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 20 00:40:51.008321 waagent[2371]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 00:40:51.008321 waagent[2371]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 00:40:51.010334 waagent[2371]: 2026-01-20T00:40:51.010051Z INFO ExtHandler ExtHandler Jan 20 00:40:51.010334 waagent[2371]: 2026-01-20T00:40:51.010111Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 77d7bf1c-b8c3-4ab5-a78c-d9d23b99aec5 correlation 68776c76-e18b-4b11-a85c-f4b1877da454 created: 2026-01-20T00:39:35.382896Z] Jan 20 00:40:51.011313 waagent[2371]: 2026-01-20T00:40:51.011272Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 20 00:40:51.011807 waagent[2371]: 2026-01-20T00:40:51.011778Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 20 00:40:51.045546 waagent[2371]: 2026-01-20T00:40:51.045505Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 20 00:40:51.045546 waagent[2371]: Try `iptables -h' or 'iptables --help' for more information.) Jan 20 00:40:51.046189 waagent[2371]: 2026-01-20T00:40:51.046154Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8784A557-837A-4D48-A649-8C68F747EE25;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 20 00:40:51.093866 waagent[2371]: 2026-01-20T00:40:51.093813Z INFO MonitorHandler ExtHandler Network interfaces: Jan 20 00:40:51.093866 waagent[2371]: Executing ['ip', '-a', '-o', 'link']: Jan 20 00:40:51.093866 waagent[2371]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 20 00:40:51.093866 waagent[2371]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:8a:f9:e3 brd ff:ff:ff:ff:ff:ff\ altname enx7ced8d8af9e3 Jan 20 00:40:51.093866 waagent[2371]: 3: enP34888s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:8a:f9:e3 brd ff:ff:ff:ff:ff:ff\ altname enP34888p0s2 Jan 20 00:40:51.093866 waagent[2371]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 20 00:40:51.093866 waagent[2371]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 20 00:40:51.093866 waagent[2371]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 20 00:40:51.093866 waagent[2371]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 20 00:40:51.093866 waagent[2371]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 20 00:40:51.093866 waagent[2371]: 2: eth0 inet6 fe80::7eed:8dff:fe8a:f9e3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 20 00:40:51.164925 waagent[2371]: 2026-01-20T00:40:51.164272Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 20 00:40:51.164925 waagent[2371]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 00:40:51.164925 waagent[2371]: pkts bytes target prot opt in out source destination Jan 20 00:40:51.164925 waagent[2371]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 00:40:51.164925 waagent[2371]: pkts bytes target prot opt in out source destination Jan 20 00:40:51.164925 waagent[2371]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 00:40:51.164925 waagent[2371]: pkts bytes target prot opt in out source destination Jan 20 00:40:51.164925 waagent[2371]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 00:40:51.164925 waagent[2371]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 00:40:51.164925 waagent[2371]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 00:40:51.166532 waagent[2371]: 2026-01-20T00:40:51.166502Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 20 00:40:51.166532 waagent[2371]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 00:40:51.166532 waagent[2371]: pkts bytes target prot opt in 
out source destination Jan 20 00:40:51.166532 waagent[2371]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 00:40:51.166532 waagent[2371]: pkts bytes target prot opt in out source destination Jan 20 00:40:51.166532 waagent[2371]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 00:40:51.166532 waagent[2371]: pkts bytes target prot opt in out source destination Jan 20 00:40:51.166532 waagent[2371]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 00:40:51.166532 waagent[2371]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 00:40:51.166532 waagent[2371]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 00:40:51.166895 waagent[2371]: 2026-01-20T00:40:51.166872Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 20 00:40:51.528037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:40:51.529050 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:49250.service - OpenSSH per-connection server daemon (10.200.16.10:49250). Jan 20 00:40:52.292356 sshd[2519]: Accepted publickey for core from 10.200.16.10 port 49250 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:52.293292 sshd-session[2519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:52.296952 systemd-logind[2110]: New session 3 of user core. Jan 20 00:40:52.302440 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:40:52.606811 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:49252.service - OpenSSH per-connection server daemon (10.200.16.10:49252). Jan 20 00:40:52.993541 sshd[2525]: Accepted publickey for core from 10.200.16.10 port 49252 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:52.996410 sshd-session[2525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:53.000068 systemd-logind[2110]: New session 4 of user core. Jan 20 00:40:53.007436 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:40:53.209102 sshd[2528]: Connection closed by 10.200.16.10 port 49252 Jan 20 00:40:53.209582 sshd-session[2525]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:53.212705 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:49252.service: Deactivated successfully. Jan 20 00:40:53.213985 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:40:53.216322 systemd-logind[2110]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:40:53.217029 systemd-logind[2110]: Removed session 4. Jan 20 00:40:53.289391 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:49262.service - OpenSSH per-connection server daemon (10.200.16.10:49262). Jan 20 00:40:53.672741 sshd[2534]: Accepted publickey for core from 10.200.16.10 port 49262 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:53.673703 sshd-session[2534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:53.677415 systemd-logind[2110]: New session 5 of user core. Jan 20 00:40:53.684599 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:40:53.884895 sshd[2537]: Connection closed by 10.200.16.10 port 49262 Jan 20 00:40:53.885278 sshd-session[2534]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:53.888617 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:49262.service: Deactivated successfully. Jan 20 00:40:53.890148 systemd[1]: session-5.scope: Deactivated successfully. 
Jan 20 00:40:53.891055 systemd-logind[2110]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:40:53.892193 systemd-logind[2110]: Removed session 5. Jan 20 00:40:53.970530 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:49276.service - OpenSSH per-connection server daemon (10.200.16.10:49276). Jan 20 00:40:54.355675 sshd[2543]: Accepted publickey for core from 10.200.16.10 port 49276 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:54.356365 sshd-session[2543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:54.360242 systemd-logind[2110]: New session 6 of user core. Jan 20 00:40:54.369435 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:40:54.570514 sshd[2546]: Connection closed by 10.200.16.10 port 49276 Jan 20 00:40:54.570936 sshd-session[2543]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:54.574020 systemd-logind[2110]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:40:54.574654 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:49276.service: Deactivated successfully. Jan 20 00:40:54.575873 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:40:54.577048 systemd-logind[2110]: Removed session 6. Jan 20 00:40:54.651483 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:49288.service - OpenSSH per-connection server daemon (10.200.16.10:49288). Jan 20 00:40:55.042183 sshd[2552]: Accepted publickey for core from 10.200.16.10 port 49288 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:55.043141 sshd-session[2552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:55.046890 systemd-logind[2110]: New session 7 of user core. Jan 20 00:40:55.057635 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:40:55.425924 sudo[2556]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:40:55.426329 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:55.427172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:40:55.429133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:40:55.467858 sudo[2556]: pam_unix(sudo:session): session closed for user root Jan 20 00:40:55.538794 sshd[2555]: Connection closed by 10.200.16.10 port 49288 Jan 20 00:40:55.539335 sshd-session[2552]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:55.542161 systemd-logind[2110]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:40:55.542318 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:49288.service: Deactivated successfully. Jan 20 00:40:55.543575 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:40:55.545595 systemd-logind[2110]: Removed session 7. Jan 20 00:40:55.628541 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:49302.service - OpenSSH per-connection server daemon (10.200.16.10:49302). Jan 20 00:40:55.656689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:40:55.666490 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:40:55.695358 kubelet[2573]: E0120 00:40:55.695225 2573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:40:55.697996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:40:55.698095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:40:55.699402 systemd[1]: kubelet.service: Consumed 109ms CPU time, 106.2M memory peak. Jan 20 00:40:56.050802 sshd[2565]: Accepted publickey for core from 10.200.16.10 port 49302 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:56.052149 sshd-session[2565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:56.056239 systemd-logind[2110]: New session 8 of user core. Jan 20 00:40:56.062452 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 00:40:56.210508 sudo[2581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:40:56.210702 sudo[2581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:56.217379 sudo[2581]: pam_unix(sudo:session): session closed for user root Jan 20 00:40:56.221576 sudo[2580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 00:40:56.221767 sudo[2580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:56.228433 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 00:40:56.257321 kernel: kauditd_printk_skb: 153 callbacks suppressed Jan 20 00:40:56.257401 kernel: audit: type=1305 audit(1768869656.252:253): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 00:40:56.252000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 00:40:56.257492 augenrules[2603]: No rules Jan 20 00:40:56.265485 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:40:56.265663 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 20 00:40:56.252000 audit[2603]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffef302fd0 a2=420 a3=0 items=0 ppid=2584 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:40:56.274753 sudo[2580]: pam_unix(sudo:session): session closed for user root Jan 20 00:40:56.282976 kernel: audit: type=1300 audit(1768869656.252:253): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffef302fd0 a2=420 a3=0 items=0 ppid=2584 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:40:56.252000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 00:40:56.290710 kernel: audit: type=1327 audit(1768869656.252:253): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 00:40:56.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.302838 kernel: audit: type=1130 audit(1768869656.266:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.302888 kernel: audit: type=1131 audit(1768869656.266:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.273000 audit[2580]: USER_END pid=2580 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.329382 kernel: audit: type=1106 audit(1768869656.273:256): pid=2580 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.329413 kernel: audit: type=1104 audit(1768869656.273:257): pid=2580 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.273000 audit[2580]: CRED_DISP pid=2580 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:56.359938 sshd[2579]: Connection closed by 10.200.16.10 port 49302 Jan 20 00:40:56.360320 sshd-session[2565]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:56.359000 audit[2565]: USER_END pid=2565 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.368190 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:49302.service: Deactivated successfully. Jan 20 00:40:56.369946 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:40:56.359000 audit[2565]: CRED_DISP pid=2565 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.394649 kernel: audit: type=1106 audit(1768869656.359:258): pid=2565 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.394693 kernel: audit: type=1104 audit(1768869656.359:259): pid=2565 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.14:22-10.200.16.10:49302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.395053 systemd-logind[2110]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:40:56.408642 kernel: audit: type=1131 audit(1768869656.365:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.14:22-10.200.16.10:49302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:40:56.409570 systemd-logind[2110]: Removed session 8. Jan 20 00:40:56.445480 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:49312.service - OpenSSH per-connection server daemon (10.200.16.10:49312). Jan 20 00:40:56.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.14:22-10.200.16.10:49312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:40:56.865000 audit[2612]: USER_ACCT pid=2612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.866828 sshd[2612]: Accepted publickey for core from 10.200.16.10 port 49312 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:40:56.866000 audit[2612]: CRED_ACQ pid=2612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.866000 audit[2612]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8a9da20 a2=3 a3=0 items=0 ppid=1 pid=2612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:40:56.866000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:40:56.868100 sshd-session[2612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:56.872476 systemd-logind[2110]: New session 9 of user core. Jan 20 00:40:56.878587 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:40:56.879000 audit[2612]: USER_START pid=2612 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:56.880000 audit[2615]: CRED_ACQ pid=2615 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:40:57.022000 audit[2616]: USER_ACCT pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:40:57.024271 sudo[2616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:40:57.023000 audit[2616]: CRED_REFR pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:40:57.024484 sudo[2616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:57.024000 audit[2616]: USER_START pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:40:58.879785 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 00:40:58.888535 (dockerd)[2634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:41:00.597326 dockerd[2634]: time="2026-01-20T00:41:00.597251808Z" level=info msg="Starting up" Jan 20 00:41:00.598158 dockerd[2634]: time="2026-01-20T00:41:00.598132200Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 00:41:00.605597 dockerd[2634]: time="2026-01-20T00:41:00.605568224Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 00:41:00.732014 dockerd[2634]: time="2026-01-20T00:41:00.731985512Z" level=info msg="Loading containers: start." Jan 20 00:41:00.802336 kernel: Initializing XFRM netlink socket Jan 20 00:41:00.883000 audit[2680]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.883000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc60fcef0 a2=0 a3=0 items=0 ppid=2634 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 00:41:00.885000 audit[2682]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.885000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff07da570 a2=0 a3=0 items=0 ppid=2634 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.885000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 00:41:00.886000 audit[2684]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.886000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe95a4850 a2=0 a3=0 items=0 ppid=2634 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 00:41:00.888000 audit[2686]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.888000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffede59480 a2=0 a3=0 items=0 ppid=2634 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 00:41:00.889000 audit[2688]: NETFILTER_CFG table=filter:9 family=2 entries=1 
op=nft_register_chain pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.889000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffde9262f0 a2=0 a3=0 items=0 ppid=2634 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 00:41:00.891000 audit[2690]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.891000 audit[2690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdfeeaae0 a2=0 a3=0 items=0 ppid=2634 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 00:41:00.893000 audit[2692]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.893000 audit[2692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffec5d7e10 a2=0 a3=0 items=0 ppid=2634 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 00:41:00.894000 audit[2694]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.894000 audit[2694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe6a4da60 a2=0 a3=0 items=0 ppid=2634 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 00:41:00.978000 audit[2697]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.978000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffccb02d00 a2=0 a3=0 items=0 ppid=2634 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 00:41:00.979000 audit[2699]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2699 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.979000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe85dbbc0 a2=0 a3=0 items=0 ppid=2634 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 00:41:00.981000 audit[2701]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.981000 audit[2701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdaed2000 a2=0 a3=0 items=0 ppid=2634 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 00:41:00.983000 audit[2703]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.983000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd8f6d0a0 a2=0 a3=0 items=0 ppid=2634 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 00:41:00.985000 audit[2705]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:00.985000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcee11630 a2=0 a3=0 items=0 ppid=2634 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:00.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 00:41:01.140000 audit[2735]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.140000 audit[2735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff3d9cba0 a2=0 a3=0 items=0 ppid=2634 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 00:41:01.142000 audit[2737]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.142000 audit[2737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffddcd6890 a2=0 a3=0 items=0 ppid=2634 
pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 00:41:01.143000 audit[2739]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.143000 audit[2739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6a7fa40 a2=0 a3=0 items=0 ppid=2634 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 00:41:01.145000 audit[2741]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2741 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.145000 audit[2741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbfa41a0 a2=0 a3=0 items=0 ppid=2634 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 00:41:01.147000 audit[2743]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.147000 audit[2743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcd2f97c0 a2=0 a3=0 items=0 ppid=2634 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 00:41:01.148000 audit[2745]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.148000 audit[2745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeffe6b30 a2=0 a3=0 items=0 ppid=2634 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 00:41:01.150000 audit[2747]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.150000 audit[2747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff5de7a30 a2=0 a3=0 items=0 ppid=2634 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
00:41:01.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 00:41:01.151000 audit[2749]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.151000 audit[2749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe4588c70 a2=0 a3=0 items=0 ppid=2634 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 00:41:01.153000 audit[2751]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.153000 audit[2751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd86dfd70 a2=0 a3=0 items=0 ppid=2634 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 00:41:01.155000 audit[2753]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.155000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd953e7b0 a2=0 a3=0 items=0 ppid=2634 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 00:41:01.156000 audit[2755]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.156000 audit[2755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdf33cd70 a2=0 a3=0 items=0 ppid=2634 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 00:41:01.158000 audit[2757]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.158000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcc786b40 a2=0 a3=0 items=0 ppid=2634 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.158000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 00:41:01.160000 audit[2759]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2759 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.160000 audit[2759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe7d82ea0 a2=0 a3=0 items=0 ppid=2634 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 00:41:01.163000 audit[2764]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.163000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0ded4d0 a2=0 a3=0 items=0 ppid=2634 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.163000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 00:41:01.165000 audit[2766]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.165000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcbe90090 a2=0 a3=0 items=0 ppid=2634 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 00:41:01.167000 audit[2768]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.167000 audit[2768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffd04da60 a2=0 a3=0 items=0 ppid=2634 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 00:41:01.168000 audit[2770]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.168000 audit[2770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffc4efa60 a2=0 a3=0 items=0 ppid=2634 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 00:41:01.170000 audit[2772]: NETFILTER_CFG table=filter:35 family=10 
entries=1 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.170000 audit[2772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcd2b2770 a2=0 a3=0 items=0 ppid=2634 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 00:41:01.171000 audit[2774]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:01.171000 audit[2774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff4700470 a2=0 a3=0 items=0 ppid=2634 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 00:41:01.225000 audit[2779]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.225000 audit[2779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff5bccb70 a2=0 a3=0 items=0 ppid=2634 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 00:41:01.227000 audit[2781]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.227000 audit[2781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc3a8c8c0 a2=0 a3=0 items=0 ppid=2634 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 00:41:01.233000 audit[2789]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.233000 audit[2789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffdcf1fac0 a2=0 a3=0 items=0 ppid=2634 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 00:41:01.236000 audit[2794]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 
00:41:01.236000 audit[2794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc11bde00 a2=0 a3=0 items=0 ppid=2634 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 00:41:01.238000 audit[2796]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.238000 audit[2796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd3e9a0d0 a2=0 a3=0 items=0 ppid=2634 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 00:41:01.239000 audit[2798]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.239000 audit[2798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0877710 a2=0 a3=0 items=0 ppid=2634 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 00:41:01.241000 audit[2800]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.241000 audit[2800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc32efae0 a2=0 a3=0 items=0 ppid=2634 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 00:41:01.242000 audit[2802]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:01.242000 audit[2802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff9e190e0 a2=0 a3=0 items=0 ppid=2634 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:01.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 00:41:01.244606 
systemd-networkd[1725]: docker0: Link UP Jan 20 00:41:01.259296 dockerd[2634]: time="2026-01-20T00:41:01.259121600Z" level=info msg="Loading containers: done." Jan 20 00:41:01.268175 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3447247196-merged.mount: Deactivated successfully. Jan 20 00:41:01.317748 dockerd[2634]: time="2026-01-20T00:41:01.317709512Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:41:01.317871 dockerd[2634]: time="2026-01-20T00:41:01.317778096Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 00:41:01.317871 dockerd[2634]: time="2026-01-20T00:41:01.317862984Z" level=info msg="Initializing buildkit" Jan 20 00:41:01.361982 dockerd[2634]: time="2026-01-20T00:41:01.361950792Z" level=info msg="Completed buildkit initialization" Jan 20 00:41:01.366597 dockerd[2634]: time="2026-01-20T00:41:01.366559712Z" level=info msg="Daemon has completed initialization" Jan 20 00:41:01.366875 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:41:01.366999 dockerd[2634]: time="2026-01-20T00:41:01.366960072Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:41:01.375399 kernel: kauditd_printk_skb: 131 callbacks suppressed Jan 20 00:41:01.375472 kernel: audit: type=1130 audit(1768869661.366:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:01.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:02.061647 containerd[2132]: time="2026-01-20T00:41:02.061609256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 00:41:02.903296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564182787.mount: Deactivated successfully. 
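The NETFILTER_CFG / SYSCALL / PROCTITLE triples above record every iptables and ip6tables call dockerd makes while creating its chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) and the MASQUERADE rule for 172.17.0.0/16 on docker0. The proctitle field is the full command line, hex-encoded with NUL bytes between arguments, so it can be read back with standard tools. A minimal sketch, assuming xxd and tr are available on the host; the hex string is copied verbatim from the first record above:

  # Decode an audit PROCTITLE value back into the command line it recorded;
  # NUL argument separators are turned into spaces for readability.
  echo 2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 \
    | xxd -r -p | tr '\0' ' '; echo
  # prints: /usr/bin/iptables --wait -t nat -N DOCKER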
Jan 20 00:41:03.849599 containerd[2132]: time="2026-01-20T00:41:03.849548440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:03.853240 containerd[2132]: time="2026-01-20T00:41:03.853202208Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25876322" Jan 20 00:41:03.856092 containerd[2132]: time="2026-01-20T00:41:03.856068928Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:03.860786 containerd[2132]: time="2026-01-20T00:41:03.860758056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:03.861516 containerd[2132]: time="2026-01-20T00:41:03.861359088Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.799656704s" Jan 20 00:41:03.861516 containerd[2132]: time="2026-01-20T00:41:03.861393352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 20 00:41:03.862499 containerd[2132]: time="2026-01-20T00:41:03.862478048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 00:41:05.555439 containerd[2132]: time="2026-01-20T00:41:05.555387736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:05.558131 containerd[2132]: time="2026-01-20T00:41:05.557955696Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 20 00:41:05.560928 containerd[2132]: time="2026-01-20T00:41:05.560906144Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:05.564777 containerd[2132]: time="2026-01-20T00:41:05.564755296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:05.565386 containerd[2132]: time="2026-01-20T00:41:05.565359712Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.70285828s" Jan 20 00:41:05.565386 containerd[2132]: time="2026-01-20T00:41:05.565386616Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 20 00:41:05.566315 
containerd[2132]: time="2026-01-20T00:41:05.565730808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 00:41:05.785537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:41:05.788465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:05.886134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:05.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:05.899125 (kubelet)[2912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:41:05.899626 kernel: audit: type=1130 audit(1768869665.884:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:06.004902 kubelet[2912]: E0120 00:41:06.004855 2912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:41:06.006908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:41:06.007078 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:41:06.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 00:41:06.007945 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107M memory peak. Jan 20 00:41:06.020323 kernel: audit: type=1131 audit(1768869666.006:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 00:41:07.509384 chronyd[2085]: Selected source PHC0 Jan 20 00:41:07.617329 containerd[2132]: time="2026-01-20T00:41:07.616934971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:07.619762 containerd[2132]: time="2026-01-20T00:41:07.619735840Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18293164" Jan 20 00:41:07.622734 containerd[2132]: time="2026-01-20T00:41:07.622709283Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:07.627139 containerd[2132]: time="2026-01-20T00:41:07.627111029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:07.627608 containerd[2132]: time="2026-01-20T00:41:07.627462452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 2.061713793s" Jan 20 00:41:07.627608 containerd[2132]: time="2026-01-20T00:41:07.627489205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 20 00:41:07.627997 containerd[2132]: time="2026-01-20T00:41:07.627980141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 00:41:08.605180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906047573.mount: Deactivated successfully. 
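The "Pulled image ... returns image reference" entries show containerd fetching the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, and now kube-proxy) one after another, each pull bracketed by a temporary mount that systemd later reports as deactivated. A hedged sketch of how the same pulls could be inspected or reproduced by hand on a node like this one, assuming crictl, ctr, and kubeadm are installed and containerd is the CRI runtime:

  # List the images containerd already holds in the Kubernetes namespace
  ctr --namespace k8s.io images ls | grep registry.k8s.io
  # Pull one of the same images through the CRI, as the kubelet would
  crictl pull registry.k8s.io/kube-proxy:v1.33.7
  # Or pre-pull the whole set for this Kubernetes version in one step
  kubeadm config images pull --kubernetes-version v1.33.7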
Jan 20 00:41:08.952095 containerd[2132]: time="2026-01-20T00:41:08.951979615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:08.954691 containerd[2132]: time="2026-01-20T00:41:08.954560605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28254952" Jan 20 00:41:08.957864 containerd[2132]: time="2026-01-20T00:41:08.957838703Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:08.961718 containerd[2132]: time="2026-01-20T00:41:08.961691979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:08.962114 containerd[2132]: time="2026-01-20T00:41:08.961911602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.333627201s" Jan 20 00:41:08.962114 containerd[2132]: time="2026-01-20T00:41:08.961942003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 20 00:41:08.962463 containerd[2132]: time="2026-01-20T00:41:08.962439314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 00:41:09.577498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269287730.mount: Deactivated successfully. 
Jan 20 00:41:10.774409 containerd[2132]: time="2026-01-20T00:41:10.773735461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:10.776327 containerd[2132]: time="2026-01-20T00:41:10.776288634Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Jan 20 00:41:10.779253 containerd[2132]: time="2026-01-20T00:41:10.779234435Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:10.782964 containerd[2132]: time="2026-01-20T00:41:10.782934139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:10.783651 containerd[2132]: time="2026-01-20T00:41:10.783628152Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.821163445s" Jan 20 00:41:10.783736 containerd[2132]: time="2026-01-20T00:41:10.783721755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 20 00:41:10.784322 containerd[2132]: time="2026-01-20T00:41:10.784273211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:41:11.329337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245973099.mount: Deactivated successfully. 
Jan 20 00:41:11.344363 containerd[2132]: time="2026-01-20T00:41:11.344327123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:11.347655 containerd[2132]: time="2026-01-20T00:41:11.347617631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 00:41:11.350342 containerd[2132]: time="2026-01-20T00:41:11.350320824Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:11.354423 containerd[2132]: time="2026-01-20T00:41:11.354380395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:11.354907 containerd[2132]: time="2026-01-20T00:41:11.354766862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 570.376471ms" Jan 20 00:41:11.354907 containerd[2132]: time="2026-01-20T00:41:11.354792767Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 20 00:41:11.355415 containerd[2132]: time="2026-01-20T00:41:11.355394521Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 00:41:12.020975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349050363.mount: Deactivated successfully. 
Jan 20 00:41:14.886343 containerd[2132]: time="2026-01-20T00:41:14.885714537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:14.888236 containerd[2132]: time="2026-01-20T00:41:14.888199268Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=68134789" Jan 20 00:41:14.891613 containerd[2132]: time="2026-01-20T00:41:14.891334675Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:14.895203 containerd[2132]: time="2026-01-20T00:41:14.895181191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:14.895900 containerd[2132]: time="2026-01-20T00:41:14.895876060Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.540456778s" Jan 20 00:41:14.895982 containerd[2132]: time="2026-01-20T00:41:14.895969303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 20 00:41:16.035898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 00:41:16.040465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:16.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:16.151475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:16.168324 kernel: audit: type=1130 audit(1768869676.150:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:16.170517 (kubelet)[3070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:41:16.260719 kubelet[3070]: E0120 00:41:16.260681 3070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:41:16.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 00:41:16.263600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:41:16.263699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:41:16.263989 systemd[1]: kubelet.service: Consumed 158ms CPU time, 105.5M memory peak. 
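Both kubelet starts so far fail the same way: the unit comes up before /var/lib/kubelet/config.yaml exists, run.go exits with status 1, and systemd schedules the next restart (restart counter 2, then 3). On a kubeadm-provisioned node this loop is expected, because that config file is only written once kubeadm init or kubeadm join runs, after which the service settles on its own. A short sketch of how the state can be checked while waiting; the paths are the usual kubeadm defaults, stated as an assumption rather than read from this log:

  # Confirm the restart loop and the missing-config error
  systemctl status kubelet --no-pager
  journalctl -u kubelet -n 20 --no-pager
  # These files appear once kubeadm init/join has run on the node
  ls -l /var/lib/kubelet/config.yaml /etc/kubernetes/kubelet.conf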
Jan 20 00:41:16.278467 kernel: audit: type=1131 audit(1768869676.262:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 00:41:17.813421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:17.813753 systemd[1]: kubelet.service: Consumed 158ms CPU time, 105.5M memory peak. Jan 20 00:41:17.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:17.820504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:17.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:17.838940 kernel: audit: type=1130 audit(1768869677.812:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:17.839000 kernel: audit: type=1131 audit(1768869677.812:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:17.859403 systemd[1]: Reload requested from client PID 3084 ('systemctl') (unit session-9.scope)... Jan 20 00:41:17.859497 systemd[1]: Reloading... Jan 20 00:41:17.950342 zram_generator::config[3132]: No configuration found. Jan 20 00:41:18.111784 systemd[1]: Reloading finished in 252 ms. 
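The "Reload requested from client PID 3084 ('systemctl') ... Reloading... Reloading finished in 252 ms" sequence, followed by the burst of BPF prog-id LOAD/UNLOAD audit records, is what a systemd manager reload looks like: unit files and generators are re-run (hence the zram_generator "No configuration found" line) and the per-unit BPF programs are detached and reattached. Presumably the provisioning flow issued something equivalent to the following, with the kubelet stop/start that appears next in the log coming from the restart:

  # Re-read unit files after dropping in new kubelet configuration
  systemctl daemon-reload
  # Restart the service so the new drop-ins take effect
  systemctl restart kubelet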
Jan 20 00:41:18.137000 audit: BPF prog-id=87 op=LOAD Jan 20 00:41:18.137000 audit: BPF prog-id=83 op=UNLOAD Jan 20 00:41:18.151153 kernel: audit: type=1334 audit(1768869678.137:317): prog-id=87 op=LOAD Jan 20 00:41:18.151196 kernel: audit: type=1334 audit(1768869678.137:318): prog-id=83 op=UNLOAD Jan 20 00:41:18.137000 audit: BPF prog-id=88 op=LOAD Jan 20 00:41:18.155284 kernel: audit: type=1334 audit(1768869678.137:319): prog-id=88 op=LOAD Jan 20 00:41:18.137000 audit: BPF prog-id=77 op=UNLOAD Jan 20 00:41:18.159869 kernel: audit: type=1334 audit(1768869678.137:320): prog-id=77 op=UNLOAD Jan 20 00:41:18.165496 kernel: audit: type=1334 audit(1768869678.138:321): prog-id=89 op=LOAD Jan 20 00:41:18.138000 audit: BPF prog-id=89 op=LOAD Jan 20 00:41:18.138000 audit: BPF prog-id=90 op=LOAD Jan 20 00:41:18.169646 kernel: audit: type=1334 audit(1768869678.138:322): prog-id=90 op=LOAD Jan 20 00:41:18.138000 audit: BPF prog-id=78 op=UNLOAD Jan 20 00:41:18.138000 audit: BPF prog-id=79 op=UNLOAD Jan 20 00:41:18.138000 audit: BPF prog-id=91 op=LOAD Jan 20 00:41:18.138000 audit: BPF prog-id=84 op=UNLOAD Jan 20 00:41:18.138000 audit: BPF prog-id=92 op=LOAD Jan 20 00:41:18.138000 audit: BPF prog-id=93 op=LOAD Jan 20 00:41:18.138000 audit: BPF prog-id=85 op=UNLOAD Jan 20 00:41:18.138000 audit: BPF prog-id=86 op=UNLOAD Jan 20 00:41:18.145000 audit: BPF prog-id=94 op=LOAD Jan 20 00:41:18.158000 audit: BPF prog-id=80 op=UNLOAD Jan 20 00:41:18.159000 audit: BPF prog-id=95 op=LOAD Jan 20 00:41:18.159000 audit: BPF prog-id=96 op=LOAD Jan 20 00:41:18.159000 audit: BPF prog-id=81 op=UNLOAD Jan 20 00:41:18.159000 audit: BPF prog-id=82 op=UNLOAD Jan 20 00:41:18.159000 audit: BPF prog-id=97 op=LOAD Jan 20 00:41:18.159000 audit: BPF prog-id=67 op=UNLOAD Jan 20 00:41:18.164000 audit: BPF prog-id=98 op=LOAD Jan 20 00:41:18.164000 audit: BPF prog-id=74 op=UNLOAD Jan 20 00:41:18.165000 audit: BPF prog-id=99 op=LOAD Jan 20 00:41:18.165000 audit: BPF prog-id=100 op=LOAD Jan 20 00:41:18.165000 audit: BPF prog-id=75 op=UNLOAD Jan 20 00:41:18.165000 audit: BPF prog-id=76 op=UNLOAD Jan 20 00:41:18.168000 audit: BPF prog-id=101 op=LOAD Jan 20 00:41:18.168000 audit: BPF prog-id=102 op=LOAD Jan 20 00:41:18.168000 audit: BPF prog-id=72 op=UNLOAD Jan 20 00:41:18.168000 audit: BPF prog-id=73 op=UNLOAD Jan 20 00:41:18.168000 audit: BPF prog-id=103 op=LOAD Jan 20 00:41:18.168000 audit: BPF prog-id=68 op=UNLOAD Jan 20 00:41:18.170000 audit: BPF prog-id=104 op=LOAD Jan 20 00:41:18.170000 audit: BPF prog-id=69 op=UNLOAD Jan 20 00:41:18.170000 audit: BPF prog-id=105 op=LOAD Jan 20 00:41:18.170000 audit: BPF prog-id=106 op=LOAD Jan 20 00:41:18.170000 audit: BPF prog-id=70 op=UNLOAD Jan 20 00:41:18.170000 audit: BPF prog-id=71 op=UNLOAD Jan 20 00:41:18.180660 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:41:18.180716 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:41:18.180989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:18.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 00:41:18.181033 systemd[1]: kubelet.service: Consumed 71ms CPU time, 95.1M memory peak. Jan 20 00:41:18.182216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:18.454823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
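Each kubelet start also logs "Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" (dockerd printed the same kind of warning earlier for DOCKER_OPTS and friends). These warnings are harmless: the unit's drop-in references optional environment files that have not been populated yet, and on the restart that follows only KUBELET_EXTRA_ARGS is still reported, suggesting the kubeadm flags file has been written in the meantime. The file names below are common kubeadm defaults, given as an assumption rather than taken from this log:

  # Show the unit plus the drop-ins that reference the optional env files
  systemctl cat kubelet --no-pager
  # Typical sources on a kubeadm node (assumed paths):
  #   /var/lib/kubelet/kubeadm-flags.env                -> KUBELET_KUBEADM_ARGS
  #   /etc/default/kubelet (or /etc/sysconfig/kubelet)  -> KUBELET_EXTRA_ARGS
  cat /var/lib/kubelet/kubeadm-flags.env 2>/dev/null || echo "not written yet"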
Jan 20 00:41:18.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:18.462487 (kubelet)[3201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:41:18.578571 kubelet[3201]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:41:18.580325 kubelet[3201]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:41:18.580325 kubelet[3201]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:41:18.580325 kubelet[3201]: I0120 00:41:18.578920 3201 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:41:18.861843 kubelet[3201]: I0120 00:41:18.861269 3201 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:41:18.861843 kubelet[3201]: I0120 00:41:18.861297 3201 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:41:18.861843 kubelet[3201]: I0120 00:41:18.861743 3201 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:41:18.880357 kubelet[3201]: I0120 00:41:18.880341 3201 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:41:18.880644 kubelet[3201]: E0120 00:41:18.880623 3201 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:41:18.887495 kubelet[3201]: I0120 00:41:18.887475 3201 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 00:41:18.889946 kubelet[3201]: I0120 00:41:18.889924 3201 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:41:18.890157 kubelet[3201]: I0120 00:41:18.890133 3201 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:41:18.890293 kubelet[3201]: I0120 00:41:18.890154 3201 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-fc9e3ff023","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:41:18.890374 kubelet[3201]: I0120 00:41:18.890298 3201 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:41:18.890374 kubelet[3201]: I0120 00:41:18.890312 3201 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:41:18.890971 kubelet[3201]: I0120 00:41:18.890951 3201 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:18.893351 kubelet[3201]: I0120 00:41:18.893336 3201 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:41:18.893382 kubelet[3201]: I0120 00:41:18.893353 3201 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:41:18.893382 kubelet[3201]: I0120 00:41:18.893375 3201 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:41:18.894587 kubelet[3201]: I0120 00:41:18.894285 3201 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:41:18.898432 kubelet[3201]: E0120 00:41:18.898411 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-fc9e3ff023&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:41:18.899584 kubelet[3201]: I0120 00:41:18.899565 3201 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 00:41:18.899919 kubelet[3201]: I0120 00:41:18.899894 3201 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Jan 20 00:41:18.899970 kubelet[3201]: W0120 00:41:18.899952 3201 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:41:18.900097 kubelet[3201]: E0120 00:41:18.900080 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:41:18.902867 kubelet[3201]: I0120 00:41:18.902846 3201 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:41:18.903979 kubelet[3201]: I0120 00:41:18.903962 3201 server.go:1289] "Started kubelet" Jan 20 00:41:18.904274 kubelet[3201]: I0120 00:41:18.904234 3201 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:41:18.904913 kubelet[3201]: I0120 00:41:18.904897 3201 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:41:18.907041 kubelet[3201]: I0120 00:41:18.906982 3201 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:41:18.907313 kubelet[3201]: I0120 00:41:18.907284 3201 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:41:18.908195 kubelet[3201]: E0120 00:41:18.907410 3201 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-n-fc9e3ff023.188c4999e107333b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-n-fc9e3ff023,UID:ci-4515.1.0-n-fc9e3ff023,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-n-fc9e3ff023,},FirstTimestamp:2026-01-20 00:41:18.903931707 +0000 UTC m=+0.438539298,LastTimestamp:2026-01-20 00:41:18.903931707 +0000 UTC m=+0.438539298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-n-fc9e3ff023,}" Jan 20 00:41:18.909818 kubelet[3201]: I0120 00:41:18.909804 3201 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:41:18.910507 kubelet[3201]: I0120 00:41:18.909867 3201 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:41:18.910650 kubelet[3201]: I0120 00:41:18.910639 3201 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:41:18.911215 kubelet[3201]: E0120 00:41:18.909900 3201 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:41:18.911489 kubelet[3201]: E0120 00:41:18.910870 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:18.911571 kubelet[3201]: E0120 00:41:18.911437 3201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-fc9e3ff023?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Jan 20 00:41:18.911614 kubelet[3201]: I0120 00:41:18.911537 3201 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:41:18.912031 kubelet[3201]: I0120 00:41:18.912013 3201 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:41:18.912249 kubelet[3201]: I0120 00:41:18.910781 3201 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:41:18.912319 kubelet[3201]: I0120 00:41:18.911732 3201 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:41:18.912402 kubelet[3201]: E0120 00:41:18.912381 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:41:18.913226 kubelet[3201]: I0120 00:41:18.913211 3201 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:41:18.918000 audit[3220]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.918000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc5256470 a2=0 a3=0 items=0 ppid=3201 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 00:41:18.919000 audit[3221]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.919000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd8b7b50 a2=0 a3=0 items=0 ppid=3201 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.919000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 00:41:18.921894 kubelet[3201]: I0120 00:41:18.921874 3201 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:41:18.921894 kubelet[3201]: I0120 00:41:18.921889 3201 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:41:18.921973 kubelet[3201]: I0120 00:41:18.921902 3201 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:18.922000 audit[3223]: 
NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.922000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc0e4db70 a2=0 a3=0 items=0 ppid=3201 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 00:41:18.923000 audit[3225]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.923000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffea333760 a2=0 a3=0 items=0 ppid=3201 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 00:41:18.975621 kubelet[3201]: I0120 00:41:18.975595 3201 policy_none.go:49] "None policy: Start" Jan 20 00:41:18.975621 kubelet[3201]: I0120 00:41:18.975623 3201 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:41:18.975697 kubelet[3201]: I0120 00:41:18.975633 3201 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:41:18.985612 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:41:18.986000 audit[3228]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.986000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff9acc930 a2=0 a3=0 items=0 ppid=3201 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 00:41:18.988487 kubelet[3201]: I0120 00:41:18.988271 3201 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 20 00:41:18.987000 audit[3230]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:18.987000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff3cb0120 a2=0 a3=0 items=0 ppid=3201 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 00:41:18.988000 audit[3231]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.989531 kubelet[3201]: I0120 00:41:18.989388 3201 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:41:18.989531 kubelet[3201]: I0120 00:41:18.989407 3201 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:41:18.988000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda7db520 a2=0 a3=0 items=0 ppid=3201 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 00:41:18.989783 kubelet[3201]: I0120 00:41:18.989757 3201 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:41:18.989783 kubelet[3201]: I0120 00:41:18.989771 3201 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:41:18.990055 kubelet[3201]: E0120 00:41:18.990029 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:41:18.990106 kubelet[3201]: E0120 00:41:18.990083 3201 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:41:18.989000 audit[3233]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.989000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8e08bd0 a2=0 a3=0 items=0 ppid=3201 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 00:41:18.991000 audit[3234]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:18.991000 audit[3234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6677470 a2=0 a3=0 items=0 ppid=3201 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 00:41:18.991000 audit[3232]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:18.991000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0a79510 a2=0 a3=0 items=0 ppid=3201 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 00:41:18.993000 audit[3235]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:18.993000 audit[3235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4f8b170 a2=0 a3=0 items=0 ppid=3201 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 00:41:18.995257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 20 00:41:18.995000 audit[3236]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:18.995000 audit[3236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc1423a0 a2=0 a3=0 items=0 ppid=3201 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:18.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 00:41:18.999044 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 00:41:19.010831 kubelet[3201]: E0120 00:41:19.010815 3201 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:41:19.011076 kubelet[3201]: I0120 00:41:19.011060 3201 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:41:19.011392 kubelet[3201]: I0120 00:41:19.011365 3201 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:41:19.011603 kubelet[3201]: I0120 00:41:19.011590 3201 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:41:19.012846 kubelet[3201]: E0120 00:41:19.012830 3201 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:41:19.012943 kubelet[3201]: E0120 00:41:19.012933 3201 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:19.101176 systemd[1]: Created slice kubepods-burstable-pod11281982cdb7ed7a1aaf8ba660a43116.slice - libcontainer container kubepods-burstable-pod11281982cdb7ed7a1aaf8ba660a43116.slice. Jan 20 00:41:19.108858 kubelet[3201]: E0120 00:41:19.108824 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.114159 kubelet[3201]: E0120 00:41:19.113967 3201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-fc9e3ff023?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Jan 20 00:41:19.114131 systemd[1]: Created slice kubepods-burstable-podd9a2e614beaf3a143e560e6dcc6fd253.slice - libcontainer container kubepods-burstable-podd9a2e614beaf3a143e560e6dcc6fd253.slice. 
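The kubepods-burstable-pod<uid>.slice units created above embed the pod UID that reappears in the RunPodSandbox and VerifyControllerAttachedVolume entries below. A small sketch for pulling the UID back out of a slice name; `pod_uid_from_slice` is a hypothetical helper and the regex only covers the dash-free UIDs seen in this log:

```python
import re
from typing import Optional

# Matches the slice names seen in this log (kubepods-burstable-pod<hex-uid>.slice);
# pods whose UIDs contain dashes would appear with escaping and are not covered here.
SLICE_RE = re.compile(r"kubepods-(?:burstable-|besteffort-)?pod(?P<uid>[0-9a-f]+)\.slice")

def pod_uid_from_slice(slice_name: str) -> Optional[str]:
    m = SLICE_RE.search(slice_name)
    return m.group("uid") if m else None

print(pod_uid_from_slice("kubepods-burstable-pod11281982cdb7ed7a1aaf8ba660a43116.slice"))
# 11281982cdb7ed7a1aaf8ba660a43116 -> the kube-apiserver pod UID used in the sandbox below
```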
Jan 20 00:41:19.118065 kubelet[3201]: I0120 00:41:19.117781 3201 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.118065 kubelet[3201]: E0120 00:41:19.117823 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.118065 kubelet[3201]: E0120 00:41:19.118009 3201 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.131524 systemd[1]: Created slice kubepods-burstable-poda31c39c670cf6f5e4f97d00602c10539.slice - libcontainer container kubepods-burstable-poda31c39c670cf6f5e4f97d00602c10539.slice. Jan 20 00:41:19.132916 kubelet[3201]: E0120 00:41:19.132841 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213340 kubelet[3201]: I0120 00:41:19.213288 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213340 kubelet[3201]: I0120 00:41:19.213344 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213447 kubelet[3201]: I0120 00:41:19.213358 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11281982cdb7ed7a1aaf8ba660a43116-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" (UID: \"11281982cdb7ed7a1aaf8ba660a43116\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213447 kubelet[3201]: I0120 00:41:19.213368 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11281982cdb7ed7a1aaf8ba660a43116-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" (UID: \"11281982cdb7ed7a1aaf8ba660a43116\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213447 kubelet[3201]: I0120 00:41:19.213378 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11281982cdb7ed7a1aaf8ba660a43116-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" (UID: \"11281982cdb7ed7a1aaf8ba660a43116\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213447 kubelet[3201]: I0120 00:41:19.213390 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a31c39c670cf6f5e4f97d00602c10539-kubeconfig\") pod 
\"kube-scheduler-ci-4515.1.0-n-fc9e3ff023\" (UID: \"a31c39c670cf6f5e4f97d00602c10539\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213447 kubelet[3201]: I0120 00:41:19.213398 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213526 kubelet[3201]: I0120 00:41:19.213407 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.213526 kubelet[3201]: I0120 00:41:19.213415 3201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.320205 kubelet[3201]: I0120 00:41:19.320157 3201 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.320633 kubelet[3201]: E0120 00:41:19.320611 3201 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.409870 containerd[2132]: time="2026-01-20T00:41:19.409739998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-fc9e3ff023,Uid:11281982cdb7ed7a1aaf8ba660a43116,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:19.418590 containerd[2132]: time="2026-01-20T00:41:19.418562905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-fc9e3ff023,Uid:d9a2e614beaf3a143e560e6dcc6fd253,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:19.434650 containerd[2132]: time="2026-01-20T00:41:19.434600086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-fc9e3ff023,Uid:a31c39c670cf6f5e4f97d00602c10539,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:19.481326 containerd[2132]: time="2026-01-20T00:41:19.481171758Z" level=info msg="connecting to shim 0b8e21dc5c93a5dc8ccd633f9864cdc909ccd44b9d745be0fc78e632ecfd7a45" address="unix:///run/containerd/s/5e93ad64092734b1a952b93ab5690d01234b13244bf39c1c712c67afa2812ea2" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:19.501454 systemd[1]: Started cri-containerd-0b8e21dc5c93a5dc8ccd633f9864cdc909ccd44b9d745be0fc78e632ecfd7a45.scope - libcontainer container 0b8e21dc5c93a5dc8ccd633f9864cdc909ccd44b9d745be0fc78e632ecfd7a45. 
Jan 20 00:41:19.516189 kubelet[3201]: E0120 00:41:19.516159 3201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-fc9e3ff023?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Jan 20 00:41:19.515000 audit: BPF prog-id=107 op=LOAD Jan 20 00:41:19.516000 audit: BPF prog-id=108 op=LOAD Jan 20 00:41:19.516000 audit[3257]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.516000 audit: BPF prog-id=108 op=UNLOAD Jan 20 00:41:19.516000 audit[3257]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.516000 audit: BPF prog-id=109 op=LOAD Jan 20 00:41:19.516000 audit[3257]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.516000 audit: BPF prog-id=110 op=LOAD Jan 20 00:41:19.516000 audit[3257]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.517000 audit: BPF prog-id=110 op=UNLOAD Jan 20 00:41:19.517000 audit[3257]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.517000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.517000 audit: BPF prog-id=109 op=UNLOAD Jan 20 00:41:19.517000 audit[3257]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.517000 audit: BPF prog-id=111 op=LOAD Jan 20 00:41:19.517000 audit[3257]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3246 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062386532316463356339336135646338636364363333663938363463 Jan 20 00:41:19.521515 containerd[2132]: time="2026-01-20T00:41:19.521481298Z" level=info msg="connecting to shim 781924b0ba3f3712499a0af841b90c822fc1b1babd6e9ce717c178cc7e7e45ed" address="unix:///run/containerd/s/4db54fa0dd7b2274d26dc2e210a1878155279565e479f3f4c206ba11ef147167" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:19.547539 systemd[1]: Started cri-containerd-781924b0ba3f3712499a0af841b90c822fc1b1babd6e9ce717c178cc7e7e45ed.scope - libcontainer container 781924b0ba3f3712499a0af841b90c822fc1b1babd6e9ce717c178cc7e7e45ed. 
Jan 20 00:41:19.554000 audit: BPF prog-id=112 op=LOAD Jan 20 00:41:19.554000 audit: BPF prog-id=113 op=LOAD Jan 20 00:41:19.554000 audit[3295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.554000 audit: BPF prog-id=113 op=UNLOAD Jan 20 00:41:19.554000 audit[3295]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.554000 audit: BPF prog-id=114 op=LOAD Jan 20 00:41:19.554000 audit[3295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.554000 audit: BPF prog-id=115 op=LOAD Jan 20 00:41:19.554000 audit[3295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.555000 audit: BPF prog-id=115 op=UNLOAD Jan 20 00:41:19.555000 audit[3295]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.555000 audit: BPF prog-id=114 op=UNLOAD Jan 20 00:41:19.555000 audit[3295]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.555000 audit: BPF prog-id=116 op=LOAD Jan 20 00:41:19.555000 audit[3295]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3284 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:19.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738313932346230626133663337313234393961306166383431623930 Jan 20 00:41:19.722824 kubelet[3201]: I0120 00:41:19.722732 3201 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:19.723615 kubelet[3201]: E0120 00:41:19.723588 3201 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:20.552049 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 20 00:41:20.552126 kubelet[3201]: E0120 00:41:20.024086 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:41:20.552126 kubelet[3201]: E0120 00:41:20.100554 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:41:20.552126 kubelet[3201]: E0120 00:41:20.184446 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:41:20.552126 kubelet[3201]: E0120 00:41:20.238258 3201 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-fc9e3ff023&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:41:20.552126 kubelet[3201]: E0120 00:41:20.317003 3201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-fc9e3ff023?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s" Jan 20 00:41:20.552126 kubelet[3201]: I0120 00:41:20.525870 3201 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:20.552254 kubelet[3201]: E0120 00:41:20.526147 3201 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:20.601532 containerd[2132]: time="2026-01-20T00:41:20.601405668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-fc9e3ff023,Uid:11281982cdb7ed7a1aaf8ba660a43116,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b8e21dc5c93a5dc8ccd633f9864cdc909ccd44b9d745be0fc78e632ecfd7a45\"" Jan 20 00:41:20.604708 containerd[2132]: time="2026-01-20T00:41:20.604676354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-fc9e3ff023,Uid:d9a2e614beaf3a143e560e6dcc6fd253,Namespace:kube-system,Attempt:0,} returns sandbox id \"781924b0ba3f3712499a0af841b90c822fc1b1babd6e9ce717c178cc7e7e45ed\"" Jan 20 00:41:20.611280 containerd[2132]: time="2026-01-20T00:41:20.611254831Z" level=info msg="CreateContainer within sandbox \"0b8e21dc5c93a5dc8ccd633f9864cdc909ccd44b9d745be0fc78e632ecfd7a45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:41:20.616186 containerd[2132]: time="2026-01-20T00:41:20.616121472Z" level=info msg="CreateContainer within sandbox \"781924b0ba3f3712499a0af841b90c822fc1b1babd6e9ce717c178cc7e7e45ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:41:20.624690 containerd[2132]: time="2026-01-20T00:41:20.624659731Z" level=info msg="connecting to shim 95cd67d0f362e0a5baff635094bca5eb51f4c8642f3ebdbb20218bba03a14e6f" address="unix:///run/containerd/s/5030a003d70cef2af70cee91d8986ef55b9d36547fde2f4c043d884c8dd85406" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:20.648474 systemd[1]: Started cri-containerd-95cd67d0f362e0a5baff635094bca5eb51f4c8642f3ebdbb20218bba03a14e6f.scope - libcontainer container 95cd67d0f362e0a5baff635094bca5eb51f4c8642f3ebdbb20218bba03a14e6f. 
Jan 20 00:41:20.655000 audit: BPF prog-id=117 op=LOAD Jan 20 00:41:20.655000 audit: BPF prog-id=118 op=LOAD Jan 20 00:41:20.655000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.655000 audit: BPF prog-id=118 op=UNLOAD Jan 20 00:41:20.655000 audit[3348]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.655000 audit: BPF prog-id=119 op=LOAD Jan 20 00:41:20.655000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.655000 audit: BPF prog-id=120 op=LOAD Jan 20 00:41:20.655000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.655000 audit: BPF prog-id=120 op=UNLOAD Jan 20 00:41:20.655000 audit[3348]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.655000 audit: BPF prog-id=119 op=UNLOAD Jan 20 00:41:20.655000 audit[3348]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.655000 audit: BPF prog-id=121 op=LOAD Jan 20 00:41:20.655000 audit[3348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3336 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636436376430663336326530613562616666363335303934626361 Jan 20 00:41:20.668075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336817575.mount: Deactivated successfully. Jan 20 00:41:20.669399 containerd[2132]: time="2026-01-20T00:41:20.669351448Z" level=info msg="Container 2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:20.670693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531946193.mount: Deactivated successfully. Jan 20 00:41:20.675346 containerd[2132]: time="2026-01-20T00:41:20.675100460Z" level=info msg="Container bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:20.690943 containerd[2132]: time="2026-01-20T00:41:20.690912898Z" level=info msg="CreateContainer within sandbox \"0b8e21dc5c93a5dc8ccd633f9864cdc909ccd44b9d745be0fc78e632ecfd7a45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9\"" Jan 20 00:41:20.691378 containerd[2132]: time="2026-01-20T00:41:20.691353232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-fc9e3ff023,Uid:a31c39c670cf6f5e4f97d00602c10539,Namespace:kube-system,Attempt:0,} returns sandbox id \"95cd67d0f362e0a5baff635094bca5eb51f4c8642f3ebdbb20218bba03a14e6f\"" Jan 20 00:41:20.691501 containerd[2132]: time="2026-01-20T00:41:20.691478588Z" level=info msg="StartContainer for \"2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9\"" Jan 20 00:41:20.692228 containerd[2132]: time="2026-01-20T00:41:20.692200787Z" level=info msg="connecting to shim 2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9" address="unix:///run/containerd/s/5e93ad64092734b1a952b93ab5690d01234b13244bf39c1c712c67afa2812ea2" protocol=ttrpc version=3 Jan 20 00:41:20.701536 containerd[2132]: time="2026-01-20T00:41:20.701500309Z" level=info msg="CreateContainer within sandbox \"781924b0ba3f3712499a0af841b90c822fc1b1babd6e9ce717c178cc7e7e45ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e\"" Jan 20 00:41:20.702565 containerd[2132]: time="2026-01-20T00:41:20.702537590Z" 
level=info msg="StartContainer for \"bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e\"" Jan 20 00:41:20.703508 containerd[2132]: time="2026-01-20T00:41:20.702897545Z" level=info msg="CreateContainer within sandbox \"95cd67d0f362e0a5baff635094bca5eb51f4c8642f3ebdbb20218bba03a14e6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:41:20.703508 containerd[2132]: time="2026-01-20T00:41:20.703403545Z" level=info msg="connecting to shim bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e" address="unix:///run/containerd/s/4db54fa0dd7b2274d26dc2e210a1878155279565e479f3f4c206ba11ef147167" protocol=ttrpc version=3 Jan 20 00:41:20.704456 systemd[1]: Started cri-containerd-2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9.scope - libcontainer container 2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9. Jan 20 00:41:20.713000 audit: BPF prog-id=122 op=LOAD Jan 20 00:41:20.714000 audit: BPF prog-id=123 op=LOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.714000 audit: BPF prog-id=123 op=UNLOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.714000 audit: BPF prog-id=124 op=LOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.714000 audit: BPF prog-id=125 op=LOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.714000 audit: BPF prog-id=125 op=UNLOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.714000 audit: BPF prog-id=124 op=UNLOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.714000 audit: BPF prog-id=126 op=LOAD Jan 20 00:41:20.714000 audit[3376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3246 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.714000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264373364376539376233353665323931373232613239386636373765 Jan 20 00:41:20.721425 systemd[1]: Started cri-containerd-bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e.scope - libcontainer container bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e. 
Jan 20 00:41:20.728454 containerd[2132]: time="2026-01-20T00:41:20.728425039Z" level=info msg="Container 52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:20.748377 containerd[2132]: time="2026-01-20T00:41:20.748149040Z" level=info msg="StartContainer for \"2d73d7e97b356e291722a298f677ee6acf711412f6689fafffe1b4ccf60195f9\" returns successfully" Jan 20 00:41:20.750000 audit: BPF prog-id=127 op=LOAD Jan 20 00:41:20.750000 audit: BPF prog-id=128 op=LOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.750000 audit: BPF prog-id=128 op=UNLOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.750000 audit: BPF prog-id=129 op=LOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.750000 audit: BPF prog-id=130 op=LOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.750000 audit: BPF prog-id=130 op=UNLOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.750000 audit: BPF prog-id=129 op=UNLOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.750000 audit: BPF prog-id=131 op=LOAD Jan 20 00:41:20.750000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3284 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663638303661353339653065373732623034653038316433346163 Jan 20 00:41:20.755194 containerd[2132]: time="2026-01-20T00:41:20.755147315Z" level=info msg="CreateContainer within sandbox \"95cd67d0f362e0a5baff635094bca5eb51f4c8642f3ebdbb20218bba03a14e6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884\"" Jan 20 00:41:20.756397 containerd[2132]: time="2026-01-20T00:41:20.756374001Z" level=info msg="StartContainer for \"52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884\"" Jan 20 00:41:20.758695 containerd[2132]: time="2026-01-20T00:41:20.758535197Z" level=info msg="connecting to shim 52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884" address="unix:///run/containerd/s/5030a003d70cef2af70cee91d8986ef55b9d36547fde2f4c043d884c8dd85406" protocol=ttrpc version=3 Jan 20 00:41:20.777452 systemd[1]: Started cri-containerd-52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884.scope - libcontainer container 52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884. 
Jan 20 00:41:20.797000 audit: BPF prog-id=132 op=LOAD Jan 20 00:41:20.797000 audit: BPF prog-id=133 op=LOAD Jan 20 00:41:20.797000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.797000 audit: BPF prog-id=133 op=UNLOAD Jan 20 00:41:20.797000 audit[3429]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.797000 audit: BPF prog-id=134 op=LOAD Jan 20 00:41:20.797000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.797000 audit: BPF prog-id=135 op=LOAD Jan 20 00:41:20.797000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.797000 audit: BPF prog-id=135 op=UNLOAD Jan 20 00:41:20.797000 audit[3429]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.797000 audit: BPF prog-id=134 op=UNLOAD Jan 20 00:41:20.797000 audit[3429]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.797000 audit: BPF prog-id=136 op=LOAD Jan 20 00:41:20.797000 audit[3429]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=3336 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:20.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532623037393439383538333065653632626464633738653032656362 Jan 20 00:41:20.800867 containerd[2132]: time="2026-01-20T00:41:20.799107898Z" level=info msg="StartContainer for \"bbf6806a539e0e772b04e081d34acd40b8b409c249a318c94b74f7bb992c660e\" returns successfully" Jan 20 00:41:20.837384 containerd[2132]: time="2026-01-20T00:41:20.836479882Z" level=info msg="StartContainer for \"52b0794985830ee62bddc78e02ecbd7aff1ffc8e737bf8162ef05776b6356884\" returns successfully" Jan 20 00:41:21.000645 kubelet[3201]: E0120 00:41:21.000460 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:21.000645 kubelet[3201]: E0120 00:41:21.000578 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:21.001509 kubelet[3201]: E0120 00:41:21.001416 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:21.989817 kubelet[3201]: E0120 00:41:21.989775 3201 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:22.005694 kubelet[3201]: E0120 00:41:22.005534 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:22.006126 kubelet[3201]: E0120 00:41:22.006111 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:22.097905 kubelet[3201]: E0120 00:41:22.097884 3201 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:22.128152 kubelet[3201]: I0120 00:41:22.128131 3201 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 
00:41:22.143515 kubelet[3201]: I0120 00:41:22.143008 3201 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:22.143515 kubelet[3201]: E0120 00:41:22.143032 3201 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-n-fc9e3ff023\": node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.158506 kubelet[3201]: E0120 00:41:22.158486 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.259101 kubelet[3201]: E0120 00:41:22.259000 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.359979 kubelet[3201]: E0120 00:41:22.359949 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.460550 kubelet[3201]: E0120 00:41:22.460511 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.560742 kubelet[3201]: E0120 00:41:22.560706 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.661212 kubelet[3201]: E0120 00:41:22.661178 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.761393 kubelet[3201]: E0120 00:41:22.761357 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.862106 kubelet[3201]: E0120 00:41:22.861871 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:22.962387 kubelet[3201]: E0120 00:41:22.962360 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:23.063193 kubelet[3201]: E0120 00:41:23.063157 3201 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-fc9e3ff023\" not found" Jan 20 00:41:23.211384 kubelet[3201]: I0120 00:41:23.211278 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:23.221035 kubelet[3201]: I0120 00:41:23.221011 3201 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:23.221155 kubelet[3201]: I0120 00:41:23.221137 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:23.228413 kubelet[3201]: I0120 00:41:23.228387 3201 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:23.228501 kubelet[3201]: I0120 00:41:23.228454 3201 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:23.238010 kubelet[3201]: I0120 00:41:23.237938 3201 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:23.899987 
kubelet[3201]: I0120 00:41:23.899962 3201 apiserver.go:52] "Watching apiserver" Jan 20 00:41:23.912674 kubelet[3201]: I0120 00:41:23.912651 3201 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:41:23.987258 systemd[1]: Reload requested from client PID 3485 ('systemctl') (unit session-9.scope)... Jan 20 00:41:23.987271 systemd[1]: Reloading... Jan 20 00:41:24.060369 zram_generator::config[3541]: No configuration found. Jan 20 00:41:24.218121 systemd[1]: Reloading finished in 230 ms. Jan 20 00:41:24.239966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:24.257953 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:41:24.258200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:24.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:24.258256 systemd[1]: kubelet.service: Consumed 581ms CPU time, 127.3M memory peak. Jan 20 00:41:24.261426 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 20 00:41:24.261466 kernel: audit: type=1131 audit(1768869684.256:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:24.279841 kernel: audit: type=1334 audit(1768869684.273:420): prog-id=137 op=LOAD Jan 20 00:41:24.273000 audit: BPF prog-id=137 op=LOAD Jan 20 00:41:24.273889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:24.273000 audit: BPF prog-id=94 op=UNLOAD Jan 20 00:41:24.284134 kernel: audit: type=1334 audit(1768869684.273:421): prog-id=94 op=UNLOAD Jan 20 00:41:24.273000 audit: BPF prog-id=138 op=LOAD Jan 20 00:41:24.288323 kernel: audit: type=1334 audit(1768869684.273:422): prog-id=138 op=LOAD Jan 20 00:41:24.288377 kernel: audit: type=1334 audit(1768869684.273:423): prog-id=139 op=LOAD Jan 20 00:41:24.273000 audit: BPF prog-id=139 op=LOAD Jan 20 00:41:24.273000 audit: BPF prog-id=95 op=UNLOAD Jan 20 00:41:24.296988 kernel: audit: type=1334 audit(1768869684.273:424): prog-id=95 op=UNLOAD Jan 20 00:41:24.273000 audit: BPF prog-id=96 op=UNLOAD Jan 20 00:41:24.301156 kernel: audit: type=1334 audit(1768869684.273:425): prog-id=96 op=UNLOAD Jan 20 00:41:24.274000 audit: BPF prog-id=140 op=LOAD Jan 20 00:41:24.305202 kernel: audit: type=1334 audit(1768869684.274:426): prog-id=140 op=LOAD Jan 20 00:41:24.274000 audit: BPF prog-id=87 op=UNLOAD Jan 20 00:41:24.309298 kernel: audit: type=1334 audit(1768869684.274:427): prog-id=87 op=UNLOAD Jan 20 00:41:24.278000 audit: BPF prog-id=141 op=LOAD Jan 20 00:41:24.313197 kernel: audit: type=1334 audit(1768869684.278:428): prog-id=141 op=LOAD Jan 20 00:41:24.278000 audit: BPF prog-id=91 op=UNLOAD Jan 20 00:41:24.278000 audit: BPF prog-id=142 op=LOAD Jan 20 00:41:24.278000 audit: BPF prog-id=143 op=LOAD Jan 20 00:41:24.278000 audit: BPF prog-id=92 op=UNLOAD Jan 20 00:41:24.278000 audit: BPF prog-id=93 op=UNLOAD Jan 20 00:41:24.283000 audit: BPF prog-id=144 op=LOAD Jan 20 00:41:24.291000 audit: BPF prog-id=103 op=UNLOAD Jan 20 00:41:24.291000 audit: BPF prog-id=145 op=LOAD Jan 20 00:41:24.291000 audit: BPF prog-id=97 op=UNLOAD Jan 20 00:41:24.296000 audit: BPF prog-id=146 op=LOAD Jan 20 00:41:24.296000 audit: BPF prog-id=147 op=LOAD Jan 20 00:41:24.296000 
audit: BPF prog-id=101 op=UNLOAD Jan 20 00:41:24.296000 audit: BPF prog-id=102 op=UNLOAD Jan 20 00:41:24.300000 audit: BPF prog-id=148 op=LOAD Jan 20 00:41:24.300000 audit: BPF prog-id=98 op=UNLOAD Jan 20 00:41:24.304000 audit: BPF prog-id=149 op=LOAD Jan 20 00:41:24.308000 audit: BPF prog-id=150 op=LOAD Jan 20 00:41:24.308000 audit: BPF prog-id=99 op=UNLOAD Jan 20 00:41:24.308000 audit: BPF prog-id=100 op=UNLOAD Jan 20 00:41:24.312000 audit: BPF prog-id=151 op=LOAD Jan 20 00:41:24.312000 audit: BPF prog-id=104 op=UNLOAD Jan 20 00:41:24.312000 audit: BPF prog-id=152 op=LOAD Jan 20 00:41:24.312000 audit: BPF prog-id=153 op=LOAD Jan 20 00:41:24.312000 audit: BPF prog-id=105 op=UNLOAD Jan 20 00:41:24.312000 audit: BPF prog-id=106 op=UNLOAD Jan 20 00:41:24.313000 audit: BPF prog-id=154 op=LOAD Jan 20 00:41:24.313000 audit: BPF prog-id=88 op=UNLOAD Jan 20 00:41:24.313000 audit: BPF prog-id=155 op=LOAD Jan 20 00:41:24.314000 audit: BPF prog-id=156 op=LOAD Jan 20 00:41:24.314000 audit: BPF prog-id=89 op=UNLOAD Jan 20 00:41:24.314000 audit: BPF prog-id=90 op=UNLOAD Jan 20 00:41:24.438217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:24.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:24.448955 (kubelet)[3599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:41:24.476766 kubelet[3599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:41:24.476766 kubelet[3599]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:41:24.476766 kubelet[3599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:41:24.476766 kubelet[3599]: I0120 00:41:24.475419 3599 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:41:24.480344 kubelet[3599]: I0120 00:41:24.480323 3599 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:41:24.480427 kubelet[3599]: I0120 00:41:24.480419 3599 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:41:24.480613 kubelet[3599]: I0120 00:41:24.480599 3599 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:41:24.481531 kubelet[3599]: I0120 00:41:24.481513 3599 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 00:41:24.483154 kubelet[3599]: I0120 00:41:24.483133 3599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:41:24.486129 kubelet[3599]: I0120 00:41:24.486115 3599 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 00:41:24.488360 kubelet[3599]: I0120 00:41:24.488346 3599 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 00:41:24.488597 kubelet[3599]: I0120 00:41:24.488578 3599 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:41:24.488754 kubelet[3599]: I0120 00:41:24.488651 3599 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-fc9e3ff023","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:41:24.488868 kubelet[3599]: I0120 00:41:24.488858 3599 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:41:24.488914 kubelet[3599]: I0120 00:41:24.488909 3599 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:41:24.488983 kubelet[3599]: I0120 00:41:24.488976 3599 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:24.489152 kubelet[3599]: 
I0120 00:41:24.489141 3599 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:41:24.489207 kubelet[3599]: I0120 00:41:24.489199 3599 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:41:24.489265 kubelet[3599]: I0120 00:41:24.489258 3599 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:41:24.489722 kubelet[3599]: I0120 00:41:24.489695 3599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:41:24.497881 kubelet[3599]: I0120 00:41:24.497864 3599 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 00:41:24.499053 kubelet[3599]: I0120 00:41:24.498282 3599 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:41:24.505579 kubelet[3599]: I0120 00:41:24.505565 3599 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:41:24.505691 kubelet[3599]: I0120 00:41:24.505684 3599 server.go:1289] "Started kubelet" Jan 20 00:41:24.506607 kubelet[3599]: I0120 00:41:24.506572 3599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:41:24.506836 kubelet[3599]: I0120 00:41:24.506820 3599 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:41:24.506896 kubelet[3599]: I0120 00:41:24.506873 3599 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:41:24.507727 kubelet[3599]: I0120 00:41:24.507715 3599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:41:24.508801 kubelet[3599]: I0120 00:41:24.508767 3599 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:41:24.510340 kubelet[3599]: I0120 00:41:24.510323 3599 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:41:24.510857 kubelet[3599]: I0120 00:41:24.510802 3599 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:41:24.513755 kubelet[3599]: I0120 00:41:24.513706 3599 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:41:24.514620 kubelet[3599]: I0120 00:41:24.514541 3599 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:41:24.515962 kubelet[3599]: I0120 00:41:24.515232 3599 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:41:24.515962 kubelet[3599]: I0120 00:41:24.515816 3599 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:41:24.519329 kubelet[3599]: I0120 00:41:24.518599 3599 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:41:24.525264 kubelet[3599]: E0120 00:41:24.525241 3599 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:41:24.536513 kubelet[3599]: I0120 00:41:24.536496 3599 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:41:24.539322 kubelet[3599]: I0120 00:41:24.539177 3599 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 20 00:41:24.539322 kubelet[3599]: I0120 00:41:24.539194 3599 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:41:24.539322 kubelet[3599]: I0120 00:41:24.539213 3599 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:41:24.539322 kubelet[3599]: I0120 00:41:24.539217 3599 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:41:24.539322 kubelet[3599]: E0120 00:41:24.539255 3599 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:41:24.557413 kubelet[3599]: I0120 00:41:24.557397 3599 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:41:24.557512 kubelet[3599]: I0120 00:41:24.557502 3599 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:41:24.557561 kubelet[3599]: I0120 00:41:24.557554 3599 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:24.557750 kubelet[3599]: I0120 00:41:24.557697 3599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:41:24.557750 kubelet[3599]: I0120 00:41:24.557707 3599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:41:24.557750 kubelet[3599]: I0120 00:41:24.557719 3599 policy_none.go:49] "None policy: Start" Jan 20 00:41:24.557750 kubelet[3599]: I0120 00:41:24.557726 3599 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:41:24.557750 kubelet[3599]: I0120 00:41:24.557733 3599 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:41:24.558011 kubelet[3599]: I0120 00:41:24.557935 3599 state_mem.go:75] "Updated machine memory state" Jan 20 00:41:24.561216 kubelet[3599]: E0120 00:41:24.561203 3599 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:41:24.561507 kubelet[3599]: I0120 00:41:24.561439 3599 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:41:24.561507 kubelet[3599]: I0120 00:41:24.561452 3599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:41:24.561946 kubelet[3599]: I0120 00:41:24.561933 3599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:41:24.564669 kubelet[3599]: E0120 00:41:24.564650 3599 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:41:24.640478 kubelet[3599]: I0120 00:41:24.640445 3599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.640606 kubelet[3599]: I0120 00:41:24.640592 3599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.640791 kubelet[3599]: I0120 00:41:24.640763 3599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.655196 kubelet[3599]: I0120 00:41:24.655174 3599 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:24.655695 kubelet[3599]: E0120 00:41:24.655400 3599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.655817 kubelet[3599]: I0120 00:41:24.655752 3599 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:24.656254 kubelet[3599]: E0120 00:41:24.656239 3599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-fc9e3ff023\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.656351 kubelet[3599]: I0120 00:41:24.655763 3599 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:24.656431 kubelet[3599]: E0120 00:41:24.656419 3599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.667344 kubelet[3599]: I0120 00:41:24.667138 3599 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.677588 kubelet[3599]: I0120 00:41:24.677561 3599 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.677679 kubelet[3599]: I0120 00:41:24.677616 3599 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716117 kubelet[3599]: I0120 00:41:24.716028 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11281982cdb7ed7a1aaf8ba660a43116-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" (UID: \"11281982cdb7ed7a1aaf8ba660a43116\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716117 kubelet[3599]: I0120 00:41:24.716056 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11281982cdb7ed7a1aaf8ba660a43116-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" (UID: \"11281982cdb7ed7a1aaf8ba660a43116\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716117 kubelet[3599]: I0120 00:41:24.716070 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716117 kubelet[3599]: I0120 00:41:24.716081 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716252 kubelet[3599]: I0120 00:41:24.716123 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716252 kubelet[3599]: I0120 00:41:24.716148 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11281982cdb7ed7a1aaf8ba660a43116-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" (UID: \"11281982cdb7ed7a1aaf8ba660a43116\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716252 kubelet[3599]: I0120 00:41:24.716160 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716252 kubelet[3599]: I0120 00:41:24.716172 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9a2e614beaf3a143e560e6dcc6fd253-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" (UID: \"d9a2e614beaf3a143e560e6dcc6fd253\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:24.716252 kubelet[3599]: I0120 00:41:24.716185 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a31c39c670cf6f5e4f97d00602c10539-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-fc9e3ff023\" (UID: \"a31c39c670cf6f5e4f97d00602c10539\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.490601 kubelet[3599]: I0120 00:41:25.490566 3599 apiserver.go:52] "Watching apiserver" Jan 20 00:41:25.514510 kubelet[3599]: I0120 00:41:25.514479 3599 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:41:25.556582 kubelet[3599]: I0120 00:41:25.556532 3599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.556965 kubelet[3599]: I0120 00:41:25.556874 3599 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.556965 kubelet[3599]: I0120 00:41:25.556943 3599 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.566225 kubelet[3599]: I0120 00:41:25.566209 3599 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:25.566750 kubelet[3599]: E0120 00:41:25.566433 3599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-n-fc9e3ff023\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.566750 kubelet[3599]: I0120 00:41:25.566389 3599 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:25.566750 kubelet[3599]: E0120 00:41:25.566594 3599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-fc9e3ff023\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.566750 kubelet[3599]: I0120 00:41:25.566357 3599 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 00:41:25.566750 kubelet[3599]: E0120 00:41:25.566665 3599 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-fc9e3ff023\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" Jan 20 00:41:25.593474 kubelet[3599]: I0120 00:41:25.593418 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-n-fc9e3ff023" podStartSLOduration=2.593410065 podStartE2EDuration="2.593410065s" podCreationTimestamp="2026-01-20 00:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:25.580241345 +0000 UTC m=+1.127206410" watchObservedRunningTime="2026-01-20 00:41:25.593410065 +0000 UTC m=+1.140375130" Jan 20 00:41:25.604573 kubelet[3599]: I0120 00:41:25.604532 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-n-fc9e3ff023" podStartSLOduration=2.60452107 podStartE2EDuration="2.60452107s" podCreationTimestamp="2026-01-20 00:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:25.593400025 +0000 UTC m=+1.140365098" watchObservedRunningTime="2026-01-20 00:41:25.60452107 +0000 UTC m=+1.151486135" Jan 20 00:41:25.616375 kubelet[3599]: I0120 00:41:25.616246 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-fc9e3ff023" podStartSLOduration=2.616237839 podStartE2EDuration="2.616237839s" podCreationTimestamp="2026-01-20 00:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:25.605581377 +0000 UTC m=+1.152546450" watchObservedRunningTime="2026-01-20 00:41:25.616237839 +0000 UTC m=+1.163202904" Jan 20 00:41:29.203527 update_engine[2116]: I20260120 00:41:29.203467 2116 update_attempter.cc:509] Updating boot flags... 
Jan 20 00:41:30.822115 kubelet[3599]: I0120 00:41:30.822065 3599 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:41:30.822725 containerd[2132]: time="2026-01-20T00:41:30.822689410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:41:30.823218 kubelet[3599]: I0120 00:41:30.822826 3599 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:41:31.477795 systemd[1]: Created slice kubepods-besteffort-podde6f107c_5516_4132_b771_321726659bd5.slice - libcontainer container kubepods-besteffort-podde6f107c_5516_4132_b771_321726659bd5.slice. Jan 20 00:41:31.565040 kubelet[3599]: I0120 00:41:31.564950 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de6f107c-5516-4132-b771-321726659bd5-lib-modules\") pod \"kube-proxy-7tcgx\" (UID: \"de6f107c-5516-4132-b771-321726659bd5\") " pod="kube-system/kube-proxy-7tcgx" Jan 20 00:41:31.565040 kubelet[3599]: I0120 00:41:31.564998 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n7c\" (UniqueName: \"kubernetes.io/projected/de6f107c-5516-4132-b771-321726659bd5-kube-api-access-s6n7c\") pod \"kube-proxy-7tcgx\" (UID: \"de6f107c-5516-4132-b771-321726659bd5\") " pod="kube-system/kube-proxy-7tcgx" Jan 20 00:41:31.565183 kubelet[3599]: I0120 00:41:31.565109 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de6f107c-5516-4132-b771-321726659bd5-kube-proxy\") pod \"kube-proxy-7tcgx\" (UID: \"de6f107c-5516-4132-b771-321726659bd5\") " pod="kube-system/kube-proxy-7tcgx" Jan 20 00:41:31.565183 kubelet[3599]: I0120 00:41:31.565122 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de6f107c-5516-4132-b771-321726659bd5-xtables-lock\") pod \"kube-proxy-7tcgx\" (UID: \"de6f107c-5516-4132-b771-321726659bd5\") " pod="kube-system/kube-proxy-7tcgx" Jan 20 00:41:31.702199 systemd[1]: Created slice kubepods-besteffort-pod5f8d2b68_690f_4c8d_9def_c7e8b1e03896.slice - libcontainer container kubepods-besteffort-pod5f8d2b68_690f_4c8d_9def_c7e8b1e03896.slice. 
Jan 20 00:41:31.767188 kubelet[3599]: I0120 00:41:31.767059 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f8d2b68-690f-4c8d-9def-c7e8b1e03896-var-lib-calico\") pod \"tigera-operator-7dcd859c48-swdwv\" (UID: \"5f8d2b68-690f-4c8d-9def-c7e8b1e03896\") " pod="tigera-operator/tigera-operator-7dcd859c48-swdwv" Jan 20 00:41:31.767188 kubelet[3599]: I0120 00:41:31.767095 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxnf\" (UniqueName: \"kubernetes.io/projected/5f8d2b68-690f-4c8d-9def-c7e8b1e03896-kube-api-access-fkxnf\") pod \"tigera-operator-7dcd859c48-swdwv\" (UID: \"5f8d2b68-690f-4c8d-9def-c7e8b1e03896\") " pod="tigera-operator/tigera-operator-7dcd859c48-swdwv" Jan 20 00:41:31.786715 containerd[2132]: time="2026-01-20T00:41:31.786673812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7tcgx,Uid:de6f107c-5516-4132-b771-321726659bd5,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:31.820934 containerd[2132]: time="2026-01-20T00:41:31.820883414Z" level=info msg="connecting to shim 8bf5ea36f781cfce09bacc658f132de2103551b60c11845ee4031cfde98b1c3a" address="unix:///run/containerd/s/3670cd60fe9f4e58dec5215f544db763fbb557912190d3f04e707cee8b640985" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:31.839439 systemd[1]: Started cri-containerd-8bf5ea36f781cfce09bacc658f132de2103551b60c11845ee4031cfde98b1c3a.scope - libcontainer container 8bf5ea36f781cfce09bacc658f132de2103551b60c11845ee4031cfde98b1c3a. Jan 20 00:41:31.845000 audit: BPF prog-id=157 op=LOAD Jan 20 00:41:31.854039 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 00:41:31.854096 kernel: audit: type=1334 audit(1768869691.845:461): prog-id=157 op=LOAD Jan 20 00:41:31.845000 audit: BPF prog-id=158 op=LOAD Jan 20 00:41:31.858789 kernel: audit: type=1334 audit(1768869691.845:462): prog-id=158 op=LOAD Jan 20 00:41:31.845000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.875608 kernel: audit: type=1300 audit(1768869691.845:462): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.893024 kernel: audit: type=1327 audit(1768869691.845:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.852000 audit: BPF prog-id=158 op=UNLOAD Jan 20 00:41:31.897639 kernel: audit: type=1334 audit(1768869691.852:463): prog-id=158 op=UNLOAD Jan 20 00:41:31.852000 audit[3731]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.913293 kernel: audit: type=1300 audit(1768869691.852:463): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.929293 kernel: audit: type=1327 audit(1768869691.852:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.852000 audit: BPF prog-id=159 op=LOAD Jan 20 00:41:31.934162 kernel: audit: type=1334 audit(1768869691.852:464): prog-id=159 op=LOAD Jan 20 00:41:31.852000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.950504 kernel: audit: type=1300 audit(1768869691.852:464): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.966368 kernel: audit: type=1327 audit(1768869691.852:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.852000 audit: BPF prog-id=160 op=LOAD Jan 20 00:41:31.852000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.852000 audit: BPF prog-id=160 op=UNLOAD Jan 20 00:41:31.852000 audit[3731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.852000 audit: BPF prog-id=159 op=UNLOAD Jan 20 00:41:31.852000 audit[3731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.852000 audit: BPF prog-id=161 op=LOAD Jan 20 00:41:31.852000 audit[3731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3718 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862663565613336663738316366636530396261636336353866313332 Jan 20 00:41:31.972498 containerd[2132]: time="2026-01-20T00:41:31.972432006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7tcgx,Uid:de6f107c-5516-4132-b771-321726659bd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bf5ea36f781cfce09bacc658f132de2103551b60c11845ee4031cfde98b1c3a\"" Jan 20 00:41:31.982066 containerd[2132]: time="2026-01-20T00:41:31.982006363Z" level=info msg="CreateContainer within sandbox \"8bf5ea36f781cfce09bacc658f132de2103551b60c11845ee4031cfde98b1c3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:41:32.004934 containerd[2132]: time="2026-01-20T00:41:32.004585286Z" level=info msg="Container 7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:32.005056 containerd[2132]: time="2026-01-20T00:41:32.005029355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-swdwv,Uid:5f8d2b68-690f-4c8d-9def-c7e8b1e03896,Namespace:tigera-operator,Attempt:0,}" Jan 20 00:41:32.026861 containerd[2132]: time="2026-01-20T00:41:32.026780951Z" level=info msg="CreateContainer within sandbox \"8bf5ea36f781cfce09bacc658f132de2103551b60c11845ee4031cfde98b1c3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311\"" Jan 20 00:41:32.027585 containerd[2132]: time="2026-01-20T00:41:32.027561117Z" level=info msg="StartContainer for \"7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311\"" Jan 20 00:41:32.028462 containerd[2132]: time="2026-01-20T00:41:32.028434165Z" level=info msg="connecting to shim 
7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311" address="unix:///run/containerd/s/3670cd60fe9f4e58dec5215f544db763fbb557912190d3f04e707cee8b640985" protocol=ttrpc version=3 Jan 20 00:41:32.044450 systemd[1]: Started cri-containerd-7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311.scope - libcontainer container 7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311. Jan 20 00:41:32.059035 containerd[2132]: time="2026-01-20T00:41:32.058925343Z" level=info msg="connecting to shim bb2bb38c19bf1418ab1ffd04829d02174a75948e7c9117506b6e74d00636b8dd" address="unix:///run/containerd/s/7eb6fb2261922d07de14efa1fcbc765346b513e011e0fe640e06b35c04b12f46" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:32.081411 systemd[1]: Started cri-containerd-bb2bb38c19bf1418ab1ffd04829d02174a75948e7c9117506b6e74d00636b8dd.scope - libcontainer container bb2bb38c19bf1418ab1ffd04829d02174a75948e7c9117506b6e74d00636b8dd. Jan 20 00:41:32.082000 audit: BPF prog-id=162 op=LOAD Jan 20 00:41:32.082000 audit[3757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3718 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353265663837303933323834626231663039656438383338353536 Jan 20 00:41:32.082000 audit: BPF prog-id=163 op=LOAD Jan 20 00:41:32.082000 audit[3757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3718 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353265663837303933323834626231663039656438383338353536 Jan 20 00:41:32.082000 audit: BPF prog-id=163 op=UNLOAD Jan 20 00:41:32.082000 audit[3757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3718 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353265663837303933323834626231663039656438383338353536 Jan 20 00:41:32.082000 audit: BPF prog-id=162 op=UNLOAD Jan 20 00:41:32.082000 audit[3757]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3718 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.082000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353265663837303933323834626231663039656438383338353536 Jan 20 00:41:32.082000 audit: BPF prog-id=164 op=LOAD Jan 20 00:41:32.082000 audit[3757]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3718 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353265663837303933323834626231663039656438383338353536 Jan 20 00:41:32.094000 audit: BPF prog-id=165 op=LOAD Jan 20 00:41:32.096000 audit: BPF prog-id=166 op=LOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.096000 audit: BPF prog-id=166 op=UNLOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.096000 audit: BPF prog-id=167 op=LOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.096000 audit: BPF prog-id=168 op=LOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.096000 audit: BPF prog-id=168 op=UNLOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.096000 audit: BPF prog-id=167 op=UNLOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.096000 audit: BPF prog-id=169 op=LOAD Jan 20 00:41:32.096000 audit[3796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3784 pid=3796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262326262333863313962663134313861623166666430343832396430 Jan 20 00:41:32.113550 containerd[2132]: time="2026-01-20T00:41:32.113515303Z" level=info msg="StartContainer for \"7152ef87093284bb1f09ed8838556ac20ed092ec6f7920b6bfa675eadfee3311\" returns successfully" Jan 20 00:41:32.127966 containerd[2132]: time="2026-01-20T00:41:32.127932772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-swdwv,Uid:5f8d2b68-690f-4c8d-9def-c7e8b1e03896,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bb2bb38c19bf1418ab1ffd04829d02174a75948e7c9117506b6e74d00636b8dd\"" Jan 20 00:41:32.130814 containerd[2132]: time="2026-01-20T00:41:32.130410018Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 00:41:32.297000 audit[3869]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3869 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.297000 audit[3869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecb8cea0 a2=0 a3=1 items=0 ppid=3769 pid=3869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.297000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 00:41:32.298000 audit[3871]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.298000 audit[3871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd13943f0 a2=0 a3=1 items=0 ppid=3769 pid=3871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 00:41:32.299000 audit[3872]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.299000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd383d8b0 a2=0 a3=1 items=0 ppid=3769 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 00:41:32.301000 audit[3868]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.301000 audit[3868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd073b0a0 a2=0 a3=1 items=0 ppid=3769 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 00:41:32.303000 audit[3875]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.303000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc282aa10 a2=0 a3=1 items=0 ppid=3769 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 00:41:32.306000 audit[3876]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.306000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7aff240 a2=0 a3=1 items=0 ppid=3769 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 00:41:32.401000 audit[3877]: NETFILTER_CFG table=filter:63 family=2 entries=1 
op=nft_register_chain pid=3877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.401000 audit[3877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffce50a780 a2=0 a3=1 items=0 ppid=3769 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 00:41:32.403000 audit[3879]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.403000 audit[3879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc3348610 a2=0 a3=1 items=0 ppid=3769 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 00:41:32.406000 audit[3882]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.406000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff8e21040 a2=0 a3=1 items=0 ppid=3769 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 00:41:32.407000 audit[3883]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.407000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff955e0c0 a2=0 a3=1 items=0 ppid=3769 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 00:41:32.409000 audit[3885]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.409000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7cefdd0 a2=0 a3=1 items=0 ppid=3769 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.409000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 00:41:32.410000 audit[3886]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.410000 audit[3886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9966430 a2=0 a3=1 items=0 ppid=3769 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 00:41:32.412000 audit[3888]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.412000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd915a890 a2=0 a3=1 items=0 ppid=3769 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 00:41:32.415000 audit[3891]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.415000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd973f840 a2=0 a3=1 items=0 ppid=3769 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 00:41:32.416000 audit[3892]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.416000 audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdddd4470 a2=0 a3=1 items=0 ppid=3769 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 00:41:32.418000 audit[3894]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.418000 audit[3894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe569eeb0 a2=0 a3=1 items=0 ppid=3769 pid=3894 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 00:41:32.419000 audit[3895]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.419000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe91ca2f0 a2=0 a3=1 items=0 ppid=3769 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 00:41:32.420000 audit[3897]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.420000 audit[3897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7120320 a2=0 a3=1 items=0 ppid=3769 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 00:41:32.423000 audit[3900]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.423000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeedb3b10 a2=0 a3=1 items=0 ppid=3769 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 00:41:32.426000 audit[3903]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.426000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff8e3a9e0 a2=0 a3=1 items=0 ppid=3769 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 
00:41:32.427000 audit[3904]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.427000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeeb8d430 a2=0 a3=1 items=0 ppid=3769 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 00:41:32.429000 audit[3906]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.429000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff8ba45d0 a2=0 a3=1 items=0 ppid=3769 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 00:41:32.432000 audit[3909]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.432000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc69e4a30 a2=0 a3=1 items=0 ppid=3769 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 00:41:32.433000 audit[3910]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.433000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef1253a0 a2=0 a3=1 items=0 ppid=3769 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 00:41:32.434000 audit[3912]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 00:41:32.434000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd0151a30 a2=0 a3=1 items=0 ppid=3769 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.434000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 00:41:32.544000 audit[3918]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:32.544000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc4fd6580 a2=0 a3=1 items=0 ppid=3769 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:32.582000 audit[3918]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3918 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:32.582000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc4fd6580 a2=0 a3=1 items=0 ppid=3769 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:32.585000 audit[3923]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.585000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff5fb2f10 a2=0 a3=1 items=0 ppid=3769 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 00:41:32.587000 audit[3925]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.587000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff210bb40 a2=0 a3=1 items=0 ppid=3769 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 00:41:32.590000 audit[3928]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.590000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc0b8b900 a2=0 a3=1 items=0 ppid=3769 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 00:41:32.591000 audit[3929]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.591000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1b1d2f0 a2=0 a3=1 items=0 ppid=3769 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 00:41:32.593000 audit[3931]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.593000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd4f5ac40 a2=0 a3=1 items=0 ppid=3769 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 00:41:32.594000 audit[3932]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.594000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd17fc1b0 a2=0 a3=1 items=0 ppid=3769 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 00:41:32.596000 audit[3934]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.596000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe5e66210 a2=0 a3=1 items=0 ppid=3769 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 00:41:32.600000 audit[3937]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.600000 audit[3937]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffeb1a5110 a2=0 a3=1 items=0 ppid=3769 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 00:41:32.601000 audit[3938]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.601000 audit[3938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1e2b990 a2=0 a3=1 items=0 ppid=3769 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 00:41:32.603000 audit[3940]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.603000 audit[3940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc6804e10 a2=0 a3=1 items=0 ppid=3769 pid=3940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 00:41:32.604000 audit[3941]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.604000 audit[3941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe16ede10 a2=0 a3=1 items=0 ppid=3769 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.604000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 00:41:32.606000 audit[3943]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.606000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff59ca3b0 a2=0 a3=1 items=0 ppid=3769 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.606000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 00:41:32.609000 audit[3946]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.609000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcd6faf0 a2=0 a3=1 items=0 ppid=3769 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 00:41:32.612000 audit[3949]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.612000 audit[3949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe838f9d0 a2=0 a3=1 items=0 ppid=3769 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 00:41:32.613000 audit[3950]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.613000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5309ca0 a2=0 a3=1 items=0 ppid=3769 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 00:41:32.616000 audit[3952]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.616000 audit[3952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcfbcd590 a2=0 a3=1 items=0 ppid=3769 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 00:41:32.619000 audit[3955]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.619000 audit[3955]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=fffff2b6bd50 a2=0 a3=1 items=0 ppid=3769 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 00:41:32.620000 audit[3956]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.620000 audit[3956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8402e70 a2=0 a3=1 items=0 ppid=3769 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.620000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 00:41:32.622000 audit[3958]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.622000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe28094e0 a2=0 a3=1 items=0 ppid=3769 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 00:41:32.623000 audit[3959]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.623000 audit[3959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0a90b00 a2=0 a3=1 items=0 ppid=3769 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 00:41:32.625000 audit[3961]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 00:41:32.625000 audit[3961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcc60a740 a2=0 a3=1 items=0 ppid=3769 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 00:41:32.628000 audit[3964]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 
00:41:32.628000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc27e4290 a2=0 a3=1 items=0 ppid=3769 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.628000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 00:41:32.630000 audit[3966]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 00:41:32.630000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffddeb6860 a2=0 a3=1 items=0 ppid=3769 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.630000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:32.631000 audit[3966]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 00:41:32.631000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffddeb6860 a2=0 a3=1 items=0 ppid=3769 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:32.631000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:32.683185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183905249.mount: Deactivated successfully. Jan 20 00:41:34.226980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3732439817.mount: Deactivated successfully. 
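The NETFILTER_CFG/SYSCALL/PROCTITLE triplets above are auditd recording kube-proxy as it builds its iptables/ip6tables chains; the proctitle field is the process command line, hex-encoded with NUL bytes separating the argv elements. A minimal decoding sketch (plain CPython, no external packages; the helper name is illustrative):

    import binascii

    def decode_proctitle(hexblob: str) -> str:
        # argv elements are separated by NUL bytes in the audit record
        raw = binascii.unhexlify(hexblob)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # First proctitle from the entries above:
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
    # -> ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle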
Jan 20 00:41:34.767797 containerd[2132]: time="2026-01-20T00:41:34.767479479Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:34.771791 containerd[2132]: time="2026-01-20T00:41:34.771649709Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 20 00:41:34.774920 containerd[2132]: time="2026-01-20T00:41:34.774895440Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:34.779820 containerd[2132]: time="2026-01-20T00:41:34.779792306Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:34.780387 containerd[2132]: time="2026-01-20T00:41:34.780367330Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.649516796s" Jan 20 00:41:34.780558 containerd[2132]: time="2026-01-20T00:41:34.780475045Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 20 00:41:34.787851 containerd[2132]: time="2026-01-20T00:41:34.787808147Z" level=info msg="CreateContainer within sandbox \"bb2bb38c19bf1418ab1ffd04829d02174a75948e7c9117506b6e74d00636b8dd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 00:41:34.804444 containerd[2132]: time="2026-01-20T00:41:34.804094237Z" level=info msg="Container ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:34.817959 containerd[2132]: time="2026-01-20T00:41:34.817930259Z" level=info msg="CreateContainer within sandbox \"bb2bb38c19bf1418ab1ffd04829d02174a75948e7c9117506b6e74d00636b8dd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff\"" Jan 20 00:41:34.819105 containerd[2132]: time="2026-01-20T00:41:34.818295597Z" level=info msg="StartContainer for \"ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff\"" Jan 20 00:41:34.819720 containerd[2132]: time="2026-01-20T00:41:34.819701020Z" level=info msg="connecting to shim ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff" address="unix:///run/containerd/s/7eb6fb2261922d07de14efa1fcbc765346b513e011e0fe640e06b35c04b12f46" protocol=ttrpc version=3 Jan 20 00:41:34.836447 systemd[1]: Started cri-containerd-ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff.scope - libcontainer container ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff. 
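For scale, the pull above reports 20,773,434 compressed bytes read over roughly 2.65 s; a quick back-of-the-envelope transfer-rate check (a sketch, figures taken directly from the two containerd entries above):

    # Rough transfer rate for the quay.io/tigera/operator:v1.38.7 pull logged above.
    bytes_read = 20_773_434        # "bytes read" from the stop-pulling entry
    duration_s = 2.649516796       # pull duration from the "Pulled image" entry
    print(f"{bytes_read / duration_s / 2**20:.1f} MiB/s")   # ~7.5 MiB/s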
Jan 20 00:41:34.843000 audit: BPF prog-id=170 op=LOAD Jan 20 00:41:34.843000 audit: BPF prog-id=171 op=LOAD Jan 20 00:41:34.843000 audit[3975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.843000 audit: BPF prog-id=171 op=UNLOAD Jan 20 00:41:34.843000 audit[3975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.844000 audit: BPF prog-id=172 op=LOAD Jan 20 00:41:34.844000 audit[3975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.844000 audit: BPF prog-id=173 op=LOAD Jan 20 00:41:34.844000 audit[3975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.844000 audit: BPF prog-id=173 op=UNLOAD Jan 20 00:41:34.844000 audit[3975]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.844000 audit: BPF prog-id=172 op=UNLOAD Jan 20 00:41:34.844000 audit[3975]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.844000 audit: BPF prog-id=174 op=LOAD Jan 20 00:41:34.844000 audit[3975]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3784 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464656164346633653531316563373036306131343566666366663261 Jan 20 00:41:34.865396 containerd[2132]: time="2026-01-20T00:41:34.865373401Z" level=info msg="StartContainer for \"ddead4f3e511ec7060a145ffcff2a2c1fcfc698906d282e05a2a6242236e52ff\" returns successfully" Jan 20 00:41:35.274978 kubelet[3599]: I0120 00:41:35.274894 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7tcgx" podStartSLOduration=4.274881546 podStartE2EDuration="4.274881546s" podCreationTimestamp="2026-01-20 00:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:32.577894375 +0000 UTC m=+8.124859440" watchObservedRunningTime="2026-01-20 00:41:35.274881546 +0000 UTC m=+10.821846619" Jan 20 00:41:39.725690 sudo[2616]: pam_unix(sudo:session): session closed for user root Jan 20 00:41:39.725000 audit[2616]: USER_END pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:41:39.729097 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 00:41:39.729161 kernel: audit: type=1106 audit(1768869699.725:541): pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:41:39.725000 audit[2616]: CRED_DISP pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 00:41:39.756296 kernel: audit: type=1104 audit(1768869699.725:542): pid=2616 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 00:41:39.817527 sshd[2615]: Connection closed by 10.200.16.10 port 49312 Jan 20 00:41:39.816641 sshd-session[2612]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:39.817000 audit[2612]: USER_END pid=2612 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:41:39.826604 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:49312.service: Deactivated successfully. Jan 20 00:41:39.817000 audit[2612]: CRED_DISP pid=2612 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:41:39.837965 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:41:39.838176 systemd[1]: session-9.scope: Consumed 3.653s CPU time, 222.1M memory peak. Jan 20 00:41:39.851759 kernel: audit: type=1106 audit(1768869699.817:543): pid=2612 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:41:39.851834 kernel: audit: type=1104 audit(1768869699.817:544): pid=2612 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:41:39.854878 systemd-logind[2110]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:41:39.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.14:22-10.200.16.10:49312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:39.868261 kernel: audit: type=1131 audit(1768869699.826:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.14:22-10.200.16.10:49312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:41:39.869997 systemd-logind[2110]: Removed session 9. 
Jan 20 00:41:41.338000 audit[4056]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:41.338000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffc9868f0 a2=0 a3=1 items=0 ppid=3769 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:41.370614 kernel: audit: type=1325 audit(1768869701.338:546): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:41.370692 kernel: audit: type=1300 audit(1768869701.338:546): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffc9868f0 a2=0 a3=1 items=0 ppid=3769 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:41.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:41.387591 kernel: audit: type=1327 audit(1768869701.338:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:41.353000 audit[4056]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:41.403372 kernel: audit: type=1325 audit(1768869701.353:547): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:41.353000 audit[4056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc9868f0 a2=0 a3=1 items=0 ppid=3769 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:41.353000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:41.432342 kernel: audit: type=1300 audit(1768869701.353:547): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc9868f0 a2=0 a3=1 items=0 ppid=3769 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:41.482000 audit[4058]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:41.482000 audit[4058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffcf5bec0 a2=0 a3=1 items=0 ppid=3769 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:41.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:41.504000 audit[4058]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4058 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 00:41:41.504000 audit[4058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffcf5bec0 a2=0 a3=1 items=0 ppid=3769 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:41.504000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:44.120000 audit[4061]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:44.120000 audit[4061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff0ef7740 a2=0 a3=1 items=0 ppid=3769 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:44.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:44.124000 audit[4061]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4061 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:44.124000 audit[4061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0ef7740 a2=0 a3=1 items=0 ppid=3769 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:44.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:44.140000 audit[4063]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:44.140000 audit[4063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2f0a7f0 a2=0 a3=1 items=0 ppid=3769 pid=4063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:44.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:44.143000 audit[4063]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:44.143000 audit[4063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2f0a7f0 a2=0 a3=1 items=0 ppid=3769 pid=4063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:44.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:45.163336 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 20 00:41:45.163446 kernel: audit: type=1325 audit(1768869705.156:554): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:45.156000 
audit[4066]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:45.156000 audit[4066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff1758a10 a2=0 a3=1 items=0 ppid=3769 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:45.188378 kernel: audit: type=1300 audit(1768869705.156:554): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff1758a10 a2=0 a3=1 items=0 ppid=3769 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:45.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:45.198329 kernel: audit: type=1327 audit(1768869705.156:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:45.188000 audit[4066]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:45.209327 kernel: audit: type=1325 audit(1768869705.188:555): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:45.188000 audit[4066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff1758a10 a2=0 a3=1 items=0 ppid=3769 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:45.232765 kernel: audit: type=1300 audit(1768869705.188:555): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff1758a10 a2=0 a3=1 items=0 ppid=3769 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:45.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:45.248642 kernel: audit: type=1327 audit(1768869705.188:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:46.059624 kubelet[3599]: I0120 00:41:46.058667 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-swdwv" podStartSLOduration=12.407527354 podStartE2EDuration="15.058653363s" podCreationTimestamp="2026-01-20 00:41:31 +0000 UTC" firstStartedPulling="2026-01-20 00:41:32.129903524 +0000 UTC m=+7.676868589" lastFinishedPulling="2026-01-20 00:41:34.781029533 +0000 UTC m=+10.327994598" observedRunningTime="2026-01-20 00:41:35.583045904 +0000 UTC m=+11.130010969" watchObservedRunningTime="2026-01-20 00:41:46.058653363 +0000 UTC m=+21.605618436" Jan 20 00:41:46.072798 systemd[1]: Created slice kubepods-besteffort-podd0b478d0_bcfc_4480_82db_b75f0ad771f3.slice - libcontainer container kubepods-besteffort-podd0b478d0_bcfc_4480_82db_b75f0ad771f3.slice. 
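The pod_startup_latency_tracker entry above for tigera-operator is self-consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check, with timestamps truncated to microseconds (a sketch, assuming plain CPython):

    # Re-derive the tigera-operator startup figures from the kubelet entry above.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created   = datetime.strptime("2026-01-20 00:41:31.000000", fmt)
    pull_from = datetime.strptime("2026-01-20 00:41:32.129903", fmt)
    pull_to   = datetime.strptime("2026-01-20 00:41:34.781029", fmt)
    running   = datetime.strptime("2026-01-20 00:41:46.058653", fmt)

    e2e = (running - created).total_seconds()             # ~15.0587s (podStartE2EDuration)
    slo = e2e - (pull_to - pull_from).total_seconds()      # ~12.4075s (podStartSLOduration)
    print(f"{e2e:.6f} {slo:.6f}")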
Jan 20 00:41:46.097000 audit[4068]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:46.097000 audit[4068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff0d8bcd0 a2=0 a3=1 items=0 ppid=3769 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.125770 kernel: audit: type=1325 audit(1768869706.097:556): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:46.125843 kernel: audit: type=1300 audit(1768869706.097:556): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff0d8bcd0 a2=0 a3=1 items=0 ppid=3769 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:46.134965 kernel: audit: type=1327 audit(1768869706.097:556): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:46.107000 audit[4068]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:46.144291 kernel: audit: type=1325 audit(1768869706.107:557): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:46.107000 audit[4068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0d8bcd0 a2=0 a3=1 items=0 ppid=3769 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:46.156693 kubelet[3599]: I0120 00:41:46.156661 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwqv\" (UniqueName: \"kubernetes.io/projected/d0b478d0-bcfc-4480-82db-b75f0ad771f3-kube-api-access-lmwqv\") pod \"calico-typha-7c9f465788-j5rqb\" (UID: \"d0b478d0-bcfc-4480-82db-b75f0ad771f3\") " pod="calico-system/calico-typha-7c9f465788-j5rqb" Jan 20 00:41:46.156866 kubelet[3599]: I0120 00:41:46.156822 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0b478d0-bcfc-4480-82db-b75f0ad771f3-tigera-ca-bundle\") pod \"calico-typha-7c9f465788-j5rqb\" (UID: \"d0b478d0-bcfc-4480-82db-b75f0ad771f3\") " pod="calico-system/calico-typha-7c9f465788-j5rqb" Jan 20 00:41:46.156932 kubelet[3599]: I0120 00:41:46.156867 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d0b478d0-bcfc-4480-82db-b75f0ad771f3-typha-certs\") pod \"calico-typha-7c9f465788-j5rqb\" (UID: \"d0b478d0-bcfc-4480-82db-b75f0ad771f3\") " 
pod="calico-system/calico-typha-7c9f465788-j5rqb" Jan 20 00:41:46.256017 systemd[1]: Created slice kubepods-besteffort-podb0aaf998_7735_4b52_8d4c_d226d71ef517.slice - libcontainer container kubepods-besteffort-podb0aaf998_7735_4b52_8d4c_d226d71ef517.slice. Jan 20 00:41:46.358620 kubelet[3599]: I0120 00:41:46.358531 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgkg\" (UniqueName: \"kubernetes.io/projected/b0aaf998-7735-4b52-8d4c-d226d71ef517-kube-api-access-vtgkg\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358620 kubelet[3599]: I0120 00:41:46.358565 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-cni-bin-dir\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358620 kubelet[3599]: I0120 00:41:46.358576 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b0aaf998-7735-4b52-8d4c-d226d71ef517-node-certs\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358620 kubelet[3599]: I0120 00:41:46.358585 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-lib-modules\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358620 kubelet[3599]: I0120 00:41:46.358594 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-var-lib-calico\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358772 kubelet[3599]: I0120 00:41:46.358604 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-var-run-calico\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358772 kubelet[3599]: I0120 00:41:46.358616 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-cni-net-dir\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358772 kubelet[3599]: I0120 00:41:46.358625 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-policysync\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358772 kubelet[3599]: I0120 00:41:46.358635 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-xtables-lock\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358772 kubelet[3599]: I0120 00:41:46.358646 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0aaf998-7735-4b52-8d4c-d226d71ef517-tigera-ca-bundle\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358847 kubelet[3599]: I0120 00:41:46.358657 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-cni-log-dir\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.358847 kubelet[3599]: I0120 00:41:46.358666 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b0aaf998-7735-4b52-8d4c-d226d71ef517-flexvol-driver-host\") pod \"calico-node-k2llv\" (UID: \"b0aaf998-7735-4b52-8d4c-d226d71ef517\") " pod="calico-system/calico-node-k2llv" Jan 20 00:41:46.377099 containerd[2132]: time="2026-01-20T00:41:46.377064845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9f465788-j5rqb,Uid:d0b478d0-bcfc-4480-82db-b75f0ad771f3,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:46.420804 containerd[2132]: time="2026-01-20T00:41:46.420745394Z" level=info msg="connecting to shim d9133a4777d9a95fc5e92317d9424e2ccea483e7e89dcbae011319237bc3dbc2" address="unix:///run/containerd/s/195cd949ef35a7458880d663d1e1a4b4d80979764390c28245489d034098e747" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:46.443467 systemd[1]: Started cri-containerd-d9133a4777d9a95fc5e92317d9424e2ccea483e7e89dcbae011319237bc3dbc2.scope - libcontainer container d9133a4777d9a95fc5e92317d9424e2ccea483e7e89dcbae011319237bc3dbc2. Jan 20 00:41:46.466842 kubelet[3599]: E0120 00:41:46.466823 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.467167 kubelet[3599]: W0120 00:41:46.466937 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.467167 kubelet[3599]: E0120 00:41:46.466968 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.467167 kubelet[3599]: E0120 00:41:46.467093 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:46.474881 kubelet[3599]: E0120 00:41:46.474496 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.474881 kubelet[3599]: W0120 00:41:46.474508 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.474881 kubelet[3599]: E0120 00:41:46.474519 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.476581 kubelet[3599]: E0120 00:41:46.476557 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.476581 kubelet[3599]: W0120 00:41:46.476575 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.476659 kubelet[3599]: E0120 00:41:46.476587 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.479000 audit: BPF prog-id=175 op=LOAD Jan 20 00:41:46.479000 audit: BPF prog-id=176 op=LOAD Jan 20 00:41:46.479000 audit[4091]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.479000 audit: BPF prog-id=176 op=UNLOAD Jan 20 00:41:46.479000 audit[4091]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.479000 audit: BPF prog-id=177 op=LOAD Jan 20 00:41:46.479000 audit[4091]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.480000 audit: BPF prog-id=178 op=LOAD Jan 20 00:41:46.480000 audit[4091]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.480000 audit: BPF prog-id=178 op=UNLOAD Jan 20 00:41:46.480000 audit[4091]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.480000 audit: BPF prog-id=177 op=UNLOAD Jan 20 00:41:46.480000 audit[4091]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.480000 audit: BPF prog-id=179 op=LOAD Jan 20 00:41:46.480000 audit[4091]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4079 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439313333613437373764396139356663356539323331376439343234 Jan 20 00:41:46.507326 containerd[2132]: time="2026-01-20T00:41:46.505224486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9f465788-j5rqb,Uid:d0b478d0-bcfc-4480-82db-b75f0ad771f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9133a4777d9a95fc5e92317d9424e2ccea483e7e89dcbae011319237bc3dbc2\"" Jan 20 00:41:46.509430 containerd[2132]: time="2026-01-20T00:41:46.509400669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" 
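
The SYSCALL and PROCTITLE audit records interleaved above hex-encode the audited command line, with a NUL byte between arguments, so the runc and iptables-restore invocations can be recovered by decoding the proctitle field. The short Go helper below is only an illustrative reader for such records; the file name and usage are made up for the example and are not part of the logged system.

// proctitle_decode.go - sketch of a reader for the audit PROCTITLE records above:
// auditd hex-encodes the command line with NUL bytes between arguments, so
// decoding the hex and splitting on '\x00' recovers the original argv.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: proctitle_decode <hex-proctitle>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid hex:", err)
		os.Exit(1)
	}
	// Arguments are NUL-separated, exactly as in /proc/<pid>/cmdline.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " "))
}

Applied to the proctitle value in the 00:41:46.097 record above this decodes to iptables-restore -w 5 -W 100000 --noflush --counters; the runc records decode the same way, with the long task path cut short in the record itself.
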
Jan 20 00:41:46.541401 kubelet[3599]: E0120 00:41:46.541367 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.541401 kubelet[3599]: W0120 00:41:46.541393 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.541518 kubelet[3599]: E0120 00:41:46.541412 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.541592 kubelet[3599]: E0120 00:41:46.541578 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.541622 kubelet[3599]: W0120 00:41:46.541588 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.542356 kubelet[3599]: E0120 00:41:46.541624 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.542577 kubelet[3599]: E0120 00:41:46.542560 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.542577 kubelet[3599]: W0120 00:41:46.542571 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.542643 kubelet[3599]: E0120 00:41:46.542582 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.543084 kubelet[3599]: E0120 00:41:46.542845 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.543084 kubelet[3599]: W0120 00:41:46.543084 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.543159 kubelet[3599]: E0120 00:41:46.543098 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.544582 kubelet[3599]: E0120 00:41:46.544559 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.544582 kubelet[3599]: W0120 00:41:46.544573 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.544582 kubelet[3599]: E0120 00:41:46.544583 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.545004 kubelet[3599]: E0120 00:41:46.544983 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.545004 kubelet[3599]: W0120 00:41:46.544997 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.545004 kubelet[3599]: E0120 00:41:46.545008 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.545787 kubelet[3599]: E0120 00:41:46.545768 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.545787 kubelet[3599]: W0120 00:41:46.545780 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.545787 kubelet[3599]: E0120 00:41:46.545791 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.546408 kubelet[3599]: E0120 00:41:46.546388 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.546408 kubelet[3599]: W0120 00:41:46.546403 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.546521 kubelet[3599]: E0120 00:41:46.546414 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.547954 kubelet[3599]: E0120 00:41:46.547935 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.547954 kubelet[3599]: W0120 00:41:46.547948 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.547954 kubelet[3599]: E0120 00:41:46.547958 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.548136 kubelet[3599]: E0120 00:41:46.548114 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.548136 kubelet[3599]: W0120 00:41:46.548125 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.548136 kubelet[3599]: E0120 00:41:46.548133 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.548258 kubelet[3599]: E0120 00:41:46.548243 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.548258 kubelet[3599]: W0120 00:41:46.548252 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.548258 kubelet[3599]: E0120 00:41:46.548258 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.548791 kubelet[3599]: E0120 00:41:46.548368 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.548791 kubelet[3599]: W0120 00:41:46.548375 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.548791 kubelet[3599]: E0120 00:41:46.548380 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.548791 kubelet[3599]: E0120 00:41:46.548486 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.548791 kubelet[3599]: W0120 00:41:46.548491 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.548791 kubelet[3599]: E0120 00:41:46.548497 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.548791 kubelet[3599]: E0120 00:41:46.548631 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.548791 kubelet[3599]: W0120 00:41:46.548637 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.548791 kubelet[3599]: E0120 00:41:46.548643 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.548934 kubelet[3599]: E0120 00:41:46.548885 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.548934 kubelet[3599]: W0120 00:41:46.548895 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.548934 kubelet[3599]: E0120 00:41:46.548905 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.549306 kubelet[3599]: E0120 00:41:46.549280 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.549306 kubelet[3599]: W0120 00:41:46.549294 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.549423 kubelet[3599]: E0120 00:41:46.549405 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.549860 kubelet[3599]: E0120 00:41:46.549838 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.549860 kubelet[3599]: W0120 00:41:46.549853 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.549860 kubelet[3599]: E0120 00:41:46.549863 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.550348 kubelet[3599]: E0120 00:41:46.550329 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.550348 kubelet[3599]: W0120 00:41:46.550343 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.550348 kubelet[3599]: E0120 00:41:46.550353 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.551009 kubelet[3599]: E0120 00:41:46.550963 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.551009 kubelet[3599]: W0120 00:41:46.550977 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.551009 kubelet[3599]: E0120 00:41:46.550987 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.551134 kubelet[3599]: E0120 00:41:46.551119 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.551134 kubelet[3599]: W0120 00:41:46.551128 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.551134 kubelet[3599]: E0120 00:41:46.551134 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.560969 kubelet[3599]: E0120 00:41:46.560948 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.560969 kubelet[3599]: W0120 00:41:46.560962 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.560969 kubelet[3599]: E0120 00:41:46.560973 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.561071 kubelet[3599]: I0120 00:41:46.560997 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e914416f-b403-4119-a223-0b5c6e18edd3-kubelet-dir\") pod \"csi-node-driver-t6nwm\" (UID: \"e914416f-b403-4119-a223-0b5c6e18edd3\") " pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:46.561259 kubelet[3599]: E0120 00:41:46.561236 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.561259 kubelet[3599]: W0120 00:41:46.561252 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.561259 kubelet[3599]: E0120 00:41:46.561262 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.561741 kubelet[3599]: I0120 00:41:46.561720 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e914416f-b403-4119-a223-0b5c6e18edd3-varrun\") pod \"csi-node-driver-t6nwm\" (UID: \"e914416f-b403-4119-a223-0b5c6e18edd3\") " pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:46.561910 kubelet[3599]: E0120 00:41:46.561894 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.561958 kubelet[3599]: W0120 00:41:46.561905 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.561958 kubelet[3599]: E0120 00:41:46.561933 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.562358 kubelet[3599]: E0120 00:41:46.562336 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.562358 kubelet[3599]: W0120 00:41:46.562350 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.562358 kubelet[3599]: E0120 00:41:46.562362 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.565566 kubelet[3599]: E0120 00:41:46.565543 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.565566 kubelet[3599]: W0120 00:41:46.565559 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.565566 kubelet[3599]: E0120 00:41:46.565569 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.565676 kubelet[3599]: I0120 00:41:46.565584 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e914416f-b403-4119-a223-0b5c6e18edd3-registration-dir\") pod \"csi-node-driver-t6nwm\" (UID: \"e914416f-b403-4119-a223-0b5c6e18edd3\") " pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:46.566027 kubelet[3599]: E0120 00:41:46.566003 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.566126 kubelet[3599]: W0120 00:41:46.566107 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.566126 kubelet[3599]: E0120 00:41:46.566125 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.566237 kubelet[3599]: I0120 00:41:46.566220 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtjb\" (UniqueName: \"kubernetes.io/projected/e914416f-b403-4119-a223-0b5c6e18edd3-kube-api-access-njtjb\") pod \"csi-node-driver-t6nwm\" (UID: \"e914416f-b403-4119-a223-0b5c6e18edd3\") " pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:46.566637 kubelet[3599]: E0120 00:41:46.566616 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.566637 kubelet[3599]: W0120 00:41:46.566630 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.566637 kubelet[3599]: E0120 00:41:46.566641 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.566822 kubelet[3599]: E0120 00:41:46.566792 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.566822 kubelet[3599]: W0120 00:41:46.566802 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.566822 kubelet[3599]: E0120 00:41:46.566810 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.567213 kubelet[3599]: E0120 00:41:46.567190 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.567213 kubelet[3599]: W0120 00:41:46.567203 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.567213 kubelet[3599]: E0120 00:41:46.567213 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.568154 containerd[2132]: time="2026-01-20T00:41:46.568120409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k2llv,Uid:b0aaf998-7735-4b52-8d4c-d226d71ef517,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:46.568386 kubelet[3599]: I0120 00:41:46.568348 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e914416f-b403-4119-a223-0b5c6e18edd3-socket-dir\") pod \"csi-node-driver-t6nwm\" (UID: \"e914416f-b403-4119-a223-0b5c6e18edd3\") " pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:46.568449 kubelet[3599]: E0120 00:41:46.568405 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.568449 kubelet[3599]: W0120 00:41:46.568413 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.568449 kubelet[3599]: E0120 00:41:46.568422 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.568704 kubelet[3599]: E0120 00:41:46.568688 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.568704 kubelet[3599]: W0120 00:41:46.568699 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.568704 kubelet[3599]: E0120 00:41:46.568708 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.569216 kubelet[3599]: E0120 00:41:46.569196 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.569216 kubelet[3599]: W0120 00:41:46.569209 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.569216 kubelet[3599]: E0120 00:41:46.569220 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.569723 kubelet[3599]: E0120 00:41:46.569702 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.569776 kubelet[3599]: W0120 00:41:46.569741 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.569776 kubelet[3599]: E0120 00:41:46.569752 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.570039 kubelet[3599]: E0120 00:41:46.570021 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.570039 kubelet[3599]: W0120 00:41:46.570033 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.570096 kubelet[3599]: E0120 00:41:46.570043 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.570685 kubelet[3599]: E0120 00:41:46.570661 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.570685 kubelet[3599]: W0120 00:41:46.570675 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.570685 kubelet[3599]: E0120 00:41:46.570685 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.615485 containerd[2132]: time="2026-01-20T00:41:46.614882145Z" level=info msg="connecting to shim 8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3" address="unix:///run/containerd/s/333c9a00fc91a5198397935545fa5d552ffe083c99e5fcd335a3644e8f37c805" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:41:46.634464 systemd[1]: Started cri-containerd-8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3.scope - libcontainer container 8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3. 
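
The RunPodSandbox and "connecting to shim" messages above are containerd acting on a CRI RunPodSandbox request from the kubelet; once the shim is reachable over ttrpc, systemd places it in the cri-containerd-<id>.scope started here. The rough sketch below issues the same call through the CRI v1 API only to make that sequence concrete; the containerd socket path and the trimmed-down PodSandboxConfig are assumptions for illustration, since the kubelet normally sends a much fuller config.

// runpodsandbox_sketch.go - illustrative CRI client, not part of the logged system.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint; the kubelet talks to the same socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata values taken from the log entry above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-k2llv",
				Uid:       "b0aaf998-7735-4b52-8d4c-d226d71ef517",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// containerd answers with the sandbox id that later appears in the log.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
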
Jan 20 00:41:46.639000 audit: BPF prog-id=180 op=LOAD Jan 20 00:41:46.640000 audit: BPF prog-id=181 op=LOAD Jan 20 00:41:46.640000 audit[4180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.640000 audit: BPF prog-id=181 op=UNLOAD Jan 20 00:41:46.640000 audit[4180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.640000 audit: BPF prog-id=182 op=LOAD Jan 20 00:41:46.640000 audit[4180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.640000 audit: BPF prog-id=183 op=LOAD Jan 20 00:41:46.640000 audit[4180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.640000 audit: BPF prog-id=183 op=UNLOAD Jan 20 00:41:46.640000 audit[4180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.640000 audit: BPF prog-id=182 op=UNLOAD Jan 20 00:41:46.640000 audit[4180]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.640000 audit: BPF prog-id=184 op=LOAD Jan 20 00:41:46.640000 audit[4180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4169 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316163666161343732616137636631656635656338383339393830 Jan 20 00:41:46.654856 containerd[2132]: time="2026-01-20T00:41:46.654824005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k2llv,Uid:b0aaf998-7735-4b52-8d4c-d226d71ef517,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\"" Jan 20 00:41:46.671330 kubelet[3599]: E0120 00:41:46.671308 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.671330 kubelet[3599]: W0120 00:41:46.671325 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.671330 kubelet[3599]: E0120 00:41:46.671337 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.671511 kubelet[3599]: E0120 00:41:46.671498 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.671511 kubelet[3599]: W0120 00:41:46.671508 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.671566 kubelet[3599]: E0120 00:41:46.671516 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.671668 kubelet[3599]: E0120 00:41:46.671653 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.671668 kubelet[3599]: W0120 00:41:46.671664 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.671828 kubelet[3599]: E0120 00:41:46.671671 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.671912 kubelet[3599]: E0120 00:41:46.671898 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.671969 kubelet[3599]: W0120 00:41:46.671957 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.672022 kubelet[3599]: E0120 00:41:46.672010 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.672313 kubelet[3599]: E0120 00:41:46.672206 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.672313 kubelet[3599]: W0120 00:41:46.672217 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.672313 kubelet[3599]: E0120 00:41:46.672227 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.672516 kubelet[3599]: E0120 00:41:46.672505 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.672578 kubelet[3599]: W0120 00:41:46.672568 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.672638 kubelet[3599]: E0120 00:41:46.672627 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.672883 kubelet[3599]: E0120 00:41:46.672862 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.672883 kubelet[3599]: W0120 00:41:46.672876 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.672952 kubelet[3599]: E0120 00:41:46.672887 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.673088 kubelet[3599]: E0120 00:41:46.673074 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.673088 kubelet[3599]: W0120 00:41:46.673085 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.673144 kubelet[3599]: E0120 00:41:46.673093 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.673277 kubelet[3599]: E0120 00:41:46.673264 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.673277 kubelet[3599]: W0120 00:41:46.673274 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.673338 kubelet[3599]: E0120 00:41:46.673282 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.676323 kubelet[3599]: E0120 00:41:46.676246 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.676323 kubelet[3599]: W0120 00:41:46.676259 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.676323 kubelet[3599]: E0120 00:41:46.676269 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.676431 kubelet[3599]: E0120 00:41:46.676416 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.676431 kubelet[3599]: W0120 00:41:46.676423 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.676431 kubelet[3599]: E0120 00:41:46.676430 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677020 kubelet[3599]: E0120 00:41:46.676564 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677020 kubelet[3599]: W0120 00:41:46.676571 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677020 kubelet[3599]: E0120 00:41:46.676579 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.677020 kubelet[3599]: E0120 00:41:46.676673 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677020 kubelet[3599]: W0120 00:41:46.676678 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677020 kubelet[3599]: E0120 00:41:46.676683 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677020 kubelet[3599]: E0120 00:41:46.676772 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677020 kubelet[3599]: W0120 00:41:46.676777 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677020 kubelet[3599]: E0120 00:41:46.676782 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677196 kubelet[3599]: E0120 00:41:46.677138 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677196 kubelet[3599]: W0120 00:41:46.677148 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677196 kubelet[3599]: E0120 00:41:46.677162 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677433 kubelet[3599]: E0120 00:41:46.677412 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677433 kubelet[3599]: W0120 00:41:46.677425 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677433 kubelet[3599]: E0120 00:41:46.677433 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677567 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677980 kubelet[3599]: W0120 00:41:46.677577 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677584 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677695 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677980 kubelet[3599]: W0120 00:41:46.677700 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677705 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677786 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.677980 kubelet[3599]: W0120 00:41:46.677791 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677796 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.677980 kubelet[3599]: E0120 00:41:46.677869 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.678159 kubelet[3599]: W0120 00:41:46.677873 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.678159 kubelet[3599]: E0120 00:41:46.677878 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.678159 kubelet[3599]: E0120 00:41:46.678003 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.678159 kubelet[3599]: W0120 00:41:46.678009 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.678159 kubelet[3599]: E0120 00:41:46.678015 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.678159 kubelet[3599]: E0120 00:41:46.678105 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.678159 kubelet[3599]: W0120 00:41:46.678110 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.678159 kubelet[3599]: E0120 00:41:46.678115 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:46.678266 kubelet[3599]: E0120 00:41:46.678187 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.678266 kubelet[3599]: W0120 00:41:46.678191 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.678266 kubelet[3599]: E0120 00:41:46.678195 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.678266 kubelet[3599]: E0120 00:41:46.678259 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.678266 kubelet[3599]: W0120 00:41:46.678262 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.678266 kubelet[3599]: E0120 00:41:46.678266 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.678394 kubelet[3599]: E0120 00:41:46.678369 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.678394 kubelet[3599]: W0120 00:41:46.678374 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.678394 kubelet[3599]: E0120 00:41:46.678379 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:46.685094 kubelet[3599]: E0120 00:41:46.685080 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:46.685159 kubelet[3599]: W0120 00:41:46.685149 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:46.685218 kubelet[3599]: E0120 00:41:46.685208 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:47.781240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662819941.mount: Deactivated successfully. 
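
The long run of driver-call.go / plugins.go errors above comes from the kubelet's FlexVolume probe: on each plugin rescan it executes the nodeagent~uds/uds binary with the single argument init and JSON-decodes its stdout, so a missing executable yields empty output and the "unexpected end of JSON input" failures logged here. They normally stop once Calico's pod2daemon-flexvol image, pulled a few lines below, installs that uds binary. The stub below is only a minimal sketch of the expected call convention, not Calico's actual driver; the type and field names are illustrative.

// flexvol_stub.go - minimal sketch of the FlexVolume call convention the kubelet
// expects from a driver binary: "<driver> init" must print a JSON status object.
// An empty reply is what produces the unmarshal errors in the records above.
package main

import (
	"encoding/json"
	"os"
)

// DriverStatus mirrors the JSON shape the kubelet's driver-call path parses.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that no attach/detach support is needed.
		json.NewEncoder(os.Stdout).Encode(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Any verb this stub does not implement still answers with valid JSON.
	json.NewEncoder(os.Stdout).Encode(DriverStatus{Status: "Not supported"})
}
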
Jan 20 00:41:48.540979 kubelet[3599]: E0120 00:41:48.540942 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:48.940928 containerd[2132]: time="2026-01-20T00:41:48.940822381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:48.944528 containerd[2132]: time="2026-01-20T00:41:48.944489981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33086690" Jan 20 00:41:48.947538 containerd[2132]: time="2026-01-20T00:41:48.947511832Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:48.951829 containerd[2132]: time="2026-01-20T00:41:48.951799471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:48.952131 containerd[2132]: time="2026-01-20T00:41:48.952047911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.442498397s" Jan 20 00:41:48.952131 containerd[2132]: time="2026-01-20T00:41:48.952071592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 00:41:48.952803 containerd[2132]: time="2026-01-20T00:41:48.952779841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 00:41:48.969806 containerd[2132]: time="2026-01-20T00:41:48.969776231Z" level=info msg="CreateContainer within sandbox \"d9133a4777d9a95fc5e92317d9424e2ccea483e7e89dcbae011319237bc3dbc2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 00:41:48.988157 containerd[2132]: time="2026-01-20T00:41:48.987624394Z" level=info msg="Container 1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:49.005539 containerd[2132]: time="2026-01-20T00:41:49.005505159Z" level=info msg="CreateContainer within sandbox \"d9133a4777d9a95fc5e92317d9424e2ccea483e7e89dcbae011319237bc3dbc2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21\"" Jan 20 00:41:49.006322 containerd[2132]: time="2026-01-20T00:41:49.005908749Z" level=info msg="StartContainer for \"1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21\"" Jan 20 00:41:49.007001 containerd[2132]: time="2026-01-20T00:41:49.006969387Z" level=info msg="connecting to shim 1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21" address="unix:///run/containerd/s/195cd949ef35a7458880d663d1e1a4b4d80979764390c28245489d034098e747" protocol=ttrpc version=3 Jan 20 00:41:49.027442 systemd[1]: Started 
cri-containerd-1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21.scope - libcontainer container 1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21. Jan 20 00:41:49.035000 audit: BPF prog-id=185 op=LOAD Jan 20 00:41:49.035000 audit: BPF prog-id=186 op=LOAD Jan 20 00:41:49.035000 audit[4242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.036000 audit: BPF prog-id=186 op=UNLOAD Jan 20 00:41:49.036000 audit[4242]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.036000 audit: BPF prog-id=187 op=LOAD Jan 20 00:41:49.036000 audit[4242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.036000 audit: BPF prog-id=188 op=LOAD Jan 20 00:41:49.036000 audit[4242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.036000 audit: BPF prog-id=188 op=UNLOAD Jan 20 00:41:49.036000 audit[4242]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.036000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.036000 audit: BPF prog-id=187 op=UNLOAD Jan 20 00:41:49.036000 audit[4242]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.036000 audit: BPF prog-id=189 op=LOAD Jan 20 00:41:49.036000 audit[4242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4079 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323366376631316239636338326138323139633065303136316563 Jan 20 00:41:49.060139 containerd[2132]: time="2026-01-20T00:41:49.059717265Z" level=info msg="StartContainer for \"1423f7f11b9cc82a8219c0e0161eca5e39f795c34c4e1ddc8072a4a8df8fdb21\" returns successfully" Jan 20 00:41:49.617963 kubelet[3599]: I0120 00:41:49.617906 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c9f465788-j5rqb" podStartSLOduration=1.174228368 podStartE2EDuration="3.617715063s" podCreationTimestamp="2026-01-20 00:41:46 +0000 UTC" firstStartedPulling="2026-01-20 00:41:46.509187414 +0000 UTC m=+22.056152479" lastFinishedPulling="2026-01-20 00:41:48.952674109 +0000 UTC m=+24.499639174" observedRunningTime="2026-01-20 00:41:49.617256959 +0000 UTC m=+25.164222024" watchObservedRunningTime="2026-01-20 00:41:49.617715063 +0000 UTC m=+25.164680128" Jan 20 00:41:49.670130 kubelet[3599]: E0120 00:41:49.670104 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.670130 kubelet[3599]: W0120 00:41:49.670127 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.670317 kubelet[3599]: E0120 00:41:49.670144 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.670317 kubelet[3599]: E0120 00:41:49.670273 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.670358 kubelet[3599]: W0120 00:41:49.670279 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.670358 kubelet[3599]: E0120 00:41:49.670326 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.670451 kubelet[3599]: E0120 00:41:49.670435 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.670451 kubelet[3599]: W0120 00:41:49.670445 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.670492 kubelet[3599]: E0120 00:41:49.670456 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.670667 kubelet[3599]: E0120 00:41:49.670647 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.670667 kubelet[3599]: W0120 00:41:49.670657 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.670667 kubelet[3599]: E0120 00:41:49.670666 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.670856 kubelet[3599]: E0120 00:41:49.670841 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.670856 kubelet[3599]: W0120 00:41:49.670851 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.670930 kubelet[3599]: E0120 00:41:49.670859 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.671001 kubelet[3599]: E0120 00:41:49.670973 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.671001 kubelet[3599]: W0120 00:41:49.670980 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.671001 kubelet[3599]: E0120 00:41:49.670986 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.671093 kubelet[3599]: E0120 00:41:49.671077 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.671093 kubelet[3599]: W0120 00:41:49.671093 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.671135 kubelet[3599]: E0120 00:41:49.671099 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.671204 kubelet[3599]: E0120 00:41:49.671191 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.671204 kubelet[3599]: W0120 00:41:49.671200 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.671269 kubelet[3599]: E0120 00:41:49.671205 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.671341 kubelet[3599]: E0120 00:41:49.671329 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.671341 kubelet[3599]: W0120 00:41:49.671338 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671344 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671475 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.672588 kubelet[3599]: W0120 00:41:49.671481 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671487 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671593 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.672588 kubelet[3599]: W0120 00:41:49.671599 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671604 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671721 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.672588 kubelet[3599]: W0120 00:41:49.671727 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672588 kubelet[3599]: E0120 00:41:49.671732 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.672740 kubelet[3599]: E0120 00:41:49.671848 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.672740 kubelet[3599]: W0120 00:41:49.671853 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672740 kubelet[3599]: E0120 00:41:49.671859 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.672740 kubelet[3599]: E0120 00:41:49.671976 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.672740 kubelet[3599]: W0120 00:41:49.671982 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672740 kubelet[3599]: E0120 00:41:49.671987 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.672740 kubelet[3599]: E0120 00:41:49.672125 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.672740 kubelet[3599]: W0120 00:41:49.672132 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.672740 kubelet[3599]: E0120 00:41:49.672139 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.695546 kubelet[3599]: E0120 00:41:49.695528 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.695546 kubelet[3599]: W0120 00:41:49.695542 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.695651 kubelet[3599]: E0120 00:41:49.695553 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.695718 kubelet[3599]: E0120 00:41:49.695704 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.695718 kubelet[3599]: W0120 00:41:49.695713 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.695766 kubelet[3599]: E0120 00:41:49.695721 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.695855 kubelet[3599]: E0120 00:41:49.695844 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.695855 kubelet[3599]: W0120 00:41:49.695852 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.695908 kubelet[3599]: E0120 00:41:49.695858 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.696047 kubelet[3599]: E0120 00:41:49.696035 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.696047 kubelet[3599]: W0120 00:41:49.696045 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.696123 kubelet[3599]: E0120 00:41:49.696052 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.696169 kubelet[3599]: E0120 00:41:49.696153 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.696169 kubelet[3599]: W0120 00:41:49.696159 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.696169 kubelet[3599]: E0120 00:41:49.696165 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.696274 kubelet[3599]: E0120 00:41:49.696260 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.696274 kubelet[3599]: W0120 00:41:49.696269 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.696274 kubelet[3599]: E0120 00:41:49.696274 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.696441 kubelet[3599]: E0120 00:41:49.696429 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.696441 kubelet[3599]: W0120 00:41:49.696437 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.696487 kubelet[3599]: E0120 00:41:49.696444 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.696739 kubelet[3599]: E0120 00:41:49.696665 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.696739 kubelet[3599]: W0120 00:41:49.696679 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.696739 kubelet[3599]: E0120 00:41:49.696691 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.696972 kubelet[3599]: E0120 00:41:49.696962 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.697044 kubelet[3599]: W0120 00:41:49.697032 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.697096 kubelet[3599]: E0120 00:41:49.697085 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.697387 kubelet[3599]: E0120 00:41:49.697355 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.697387 kubelet[3599]: W0120 00:41:49.697365 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.697387 kubelet[3599]: E0120 00:41:49.697374 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.697658 kubelet[3599]: E0120 00:41:49.697648 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.697819 kubelet[3599]: W0120 00:41:49.697718 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.697819 kubelet[3599]: E0120 00:41:49.697735 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.697930 kubelet[3599]: E0120 00:41:49.697922 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.697973 kubelet[3599]: W0120 00:41:49.697965 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698019 kubelet[3599]: E0120 00:41:49.698012 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.698357 kubelet[3599]: E0120 00:41:49.698209 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.698357 kubelet[3599]: W0120 00:41:49.698219 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698357 kubelet[3599]: E0120 00:41:49.698227 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.698452 kubelet[3599]: E0120 00:41:49.698419 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.698452 kubelet[3599]: W0120 00:41:49.698428 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698452 kubelet[3599]: E0120 00:41:49.698436 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.698662 kubelet[3599]: E0120 00:41:49.698523 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.698662 kubelet[3599]: W0120 00:41:49.698532 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698662 kubelet[3599]: E0120 00:41:49.698537 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.698662 kubelet[3599]: E0120 00:41:49.698632 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.698662 kubelet[3599]: W0120 00:41:49.698637 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698662 kubelet[3599]: E0120 00:41:49.698641 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:49.698843 kubelet[3599]: E0120 00:41:49.698826 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.698843 kubelet[3599]: W0120 00:41:49.698836 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698843 kubelet[3599]: E0120 00:41:49.698842 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:49.698945 kubelet[3599]: E0120 00:41:49.698936 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:49.698945 kubelet[3599]: W0120 00:41:49.698942 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:49.698979 kubelet[3599]: E0120 00:41:49.698947 3599 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:50.316333 containerd[2132]: time="2026-01-20T00:41:50.316046900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:50.319624 containerd[2132]: time="2026-01-20T00:41:50.319581848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 00:41:50.322465 containerd[2132]: time="2026-01-20T00:41:50.322430372Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:50.327240 containerd[2132]: time="2026-01-20T00:41:50.326659737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:50.327240 containerd[2132]: time="2026-01-20T00:41:50.327040534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.374234716s" Jan 20 00:41:50.327240 containerd[2132]: time="2026-01-20T00:41:50.327060967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 00:41:50.334236 containerd[2132]: time="2026-01-20T00:41:50.334189545Z" level=info msg="CreateContainer within sandbox \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 00:41:50.359168 containerd[2132]: time="2026-01-20T00:41:50.358460495Z" level=info msg="Container 9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8: CDI 
devices from CRI Config.CDIDevices: []" Jan 20 00:41:50.376887 containerd[2132]: time="2026-01-20T00:41:50.376857318Z" level=info msg="CreateContainer within sandbox \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8\"" Jan 20 00:41:50.378379 containerd[2132]: time="2026-01-20T00:41:50.378351746Z" level=info msg="StartContainer for \"9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8\"" Jan 20 00:41:50.380185 containerd[2132]: time="2026-01-20T00:41:50.380160562Z" level=info msg="connecting to shim 9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8" address="unix:///run/containerd/s/333c9a00fc91a5198397935545fa5d552ffe083c99e5fcd335a3644e8f37c805" protocol=ttrpc version=3 Jan 20 00:41:50.401450 systemd[1]: Started cri-containerd-9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8.scope - libcontainer container 9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8. Jan 20 00:41:50.443000 audit: BPF prog-id=190 op=LOAD Jan 20 00:41:50.447905 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 20 00:41:50.447963 kernel: audit: type=1334 audit(1768869710.443:582): prog-id=190 op=LOAD Jan 20 00:41:50.443000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.468092 kernel: audit: type=1300 audit(1768869710.443:582): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.486669 kernel: audit: type=1327 audit(1768869710.443:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.446000 audit: BPF prog-id=191 op=LOAD Jan 20 00:41:50.446000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.512015 kernel: audit: type=1334 audit(1768869710.446:583): prog-id=191 op=LOAD Jan 20 00:41:50.512091 kernel: audit: type=1300 audit(1768869710.446:583): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.446000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.522756 containerd[2132]: time="2026-01-20T00:41:50.522716471Z" level=info msg="StartContainer for \"9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8\" returns successfully" Jan 20 00:41:50.529772 kernel: audit: type=1327 audit(1768869710.446:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.447000 audit: BPF prog-id=191 op=UNLOAD Jan 20 00:41:50.535277 kernel: audit: type=1334 audit(1768869710.447:584): prog-id=191 op=UNLOAD Jan 20 00:41:50.447000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.551380 kernel: audit: type=1300 audit(1768869710.447:584): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.568832 kernel: audit: type=1327 audit(1768869710.447:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.447000 audit: BPF prog-id=190 op=UNLOAD Jan 20 00:41:50.572127 kubelet[3599]: E0120 00:41:50.571783 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:50.574221 kernel: audit: type=1334 audit(1768869710.447:585): prog-id=190 op=UNLOAD Jan 20 00:41:50.574740 systemd[1]: cri-containerd-9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8.scope: Deactivated successfully. 
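Editor's note on the audit records interleaved above: the proctitle= field is the invoked command line, hex-encoded with NUL bytes separating the argv elements (and truncated by auditd). Decoding it confirms these are runc invocations for the container IDs shown in the containerd lines. A small sketch of the decoding, using a prefix of one of the logged values:

```python
#!/usr/bin/env python3
# Sketch: decode the hex-encoded "proctitle=" field of a Linux audit record.
# argv elements are stored as raw bytes separated by NUL bytes.
def decode_proctitle(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # Prefix of a PROCTITLE value from the log above (the full value is truncated there).
    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
              "002D2D6C6F67")
    print(decode_proctitle(sample))  # -> runc --root /run/containerd/runc/k8s.io --log
```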
Jan 20 00:41:50.447000 audit[4317]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.447000 audit: BPF prog-id=192 op=LOAD Jan 20 00:41:50.447000 audit[4317]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4169 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:50.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961393838626465366537653462323464626662343333363464333864 Jan 20 00:41:50.577812 containerd[2132]: time="2026-01-20T00:41:50.577695332Z" level=info msg="received container exit event container_id:\"9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8\" id:\"9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8\" pid:4330 exited_at:{seconds:1768869710 nanos:576972163}" Jan 20 00:41:50.577000 audit: BPF prog-id=192 op=UNLOAD Jan 20 00:41:50.604403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a988bde6e7e4b24dbfb43364d38d249a48706386487fa5544c7c6dbf70282f8-rootfs.mount: Deactivated successfully. 
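Editor's note: containerd reports the flexvol-driver container's termination as exited_at:{seconds:1768869710 nanos:576972163}, i.e. a Unix-epoch timestamp pair, and the kernel audit records use the same epoch seconds (audit(1768869710.443:582)). Converting the pair back to wall-clock time lines the exit event up with the surrounding journal entries; a small sketch using the values quoted from this log:

```python
#!/usr/bin/env python3
# Sketch: convert containerd's exited_at {seconds, nanos} pair (and the epoch
# prefix of the audit records) into readable UTC timestamps for correlation.
from datetime import datetime, timedelta, timezone

def epoch_to_utc(seconds: int, nanos: int = 0) -> str:
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc) + timedelta(microseconds=nanos // 1000)
    return ts.strftime("%b %d %H:%M:%S.%f UTC")

if __name__ == "__main__":
    # Exit event of container 9a988bde... from the log above.
    print(epoch_to_utc(1768869710, 576972163))  # Jan 20 00:41:50.576972 UTC
    # Epoch prefix of the audit records, e.g. audit(1768869710.443:582).
    print(epoch_to_utc(1768869710))             # Jan 20 00:41:50.000000 UTC
```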
Jan 20 00:41:50.610515 kubelet[3599]: I0120 00:41:50.610493 3599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:41:52.539729 kubelet[3599]: E0120 00:41:52.539684 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:52.617031 containerd[2132]: time="2026-01-20T00:41:52.616266578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:41:54.541315 kubelet[3599]: E0120 00:41:54.541269 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:55.802342 containerd[2132]: time="2026-01-20T00:41:55.802041666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:55.804897 containerd[2132]: time="2026-01-20T00:41:55.804763516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 20 00:41:55.808039 containerd[2132]: time="2026-01-20T00:41:55.808012199Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:55.812409 containerd[2132]: time="2026-01-20T00:41:55.812367727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:55.813584 containerd[2132]: time="2026-01-20T00:41:55.813490719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.197192147s" Jan 20 00:41:55.813584 containerd[2132]: time="2026-01-20T00:41:55.813515913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 00:41:55.822461 containerd[2132]: time="2026-01-20T00:41:55.822435445Z" level=info msg="CreateContainer within sandbox \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:41:55.849335 containerd[2132]: time="2026-01-20T00:41:55.848714821Z" level=info msg="Container 41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:41:55.879200 containerd[2132]: time="2026-01-20T00:41:55.879163933Z" level=info msg="CreateContainer within sandbox \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0\"" Jan 20 00:41:55.880668 containerd[2132]: 
time="2026-01-20T00:41:55.880597969Z" level=info msg="StartContainer for \"41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0\"" Jan 20 00:41:55.882377 containerd[2132]: time="2026-01-20T00:41:55.882356594Z" level=info msg="connecting to shim 41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0" address="unix:///run/containerd/s/333c9a00fc91a5198397935545fa5d552ffe083c99e5fcd335a3644e8f37c805" protocol=ttrpc version=3 Jan 20 00:41:55.898448 systemd[1]: Started cri-containerd-41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0.scope - libcontainer container 41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0. Jan 20 00:41:55.930000 audit: BPF prog-id=193 op=LOAD Jan 20 00:41:55.934924 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 00:41:55.934980 kernel: audit: type=1334 audit(1768869715.930:588): prog-id=193 op=LOAD Jan 20 00:41:55.930000 audit[4380]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.955403 kernel: audit: type=1300 audit(1768869715.930:588): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:55.972043 kernel: audit: type=1327 audit(1768869715.930:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:55.973237 kubelet[3599]: I0120 00:41:55.973114 3599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:41:55.930000 audit: BPF prog-id=194 op=LOAD Jan 20 00:41:55.978605 kernel: audit: type=1334 audit(1768869715.930:589): prog-id=194 op=LOAD Jan 20 00:41:55.930000 audit[4380]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.995489 kernel: audit: type=1300 audit(1768869715.930:589): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:56.015657 kernel: audit: type=1327 
audit(1768869715.930:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:55.934000 audit: BPF prog-id=194 op=UNLOAD Jan 20 00:41:56.025540 kernel: audit: type=1334 audit(1768869715.934:590): prog-id=194 op=UNLOAD Jan 20 00:41:55.934000 audit[4380]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:56.032546 containerd[2132]: time="2026-01-20T00:41:56.032452659Z" level=info msg="StartContainer for \"41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0\" returns successfully" Jan 20 00:41:56.044150 kernel: audit: type=1300 audit(1768869715.934:590): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:56.063450 kernel: audit: type=1327 audit(1768869715.934:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:55.934000 audit: BPF prog-id=193 op=UNLOAD Jan 20 00:41:56.069531 kernel: audit: type=1334 audit(1768869715.934:591): prog-id=193 op=UNLOAD Jan 20 00:41:55.934000 audit[4380]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:55.934000 audit: BPF prog-id=195 op=LOAD Jan 20 00:41:55.934000 audit[4380]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4169 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:55.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431663632653035613063393863393533333165613534643662326561 Jan 20 00:41:56.045000 audit[4409]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4409 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:56.045000 audit[4409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff1929e00 a2=0 a3=1 items=0 ppid=3769 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:56.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:56.069000 audit[4409]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:41:56.069000 audit[4409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff1929e00 a2=0 a3=1 items=0 ppid=3769 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:41:56.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:41:56.540816 kubelet[3599]: E0120 00:41:56.540513 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:57.220721 containerd[2132]: time="2026-01-20T00:41:57.220507494Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:41:57.224167 systemd[1]: cri-containerd-41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0.scope: Deactivated successfully. Jan 20 00:41:57.225126 containerd[2132]: time="2026-01-20T00:41:57.224810930Z" level=info msg="received container exit event container_id:\"41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0\" id:\"41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0\" pid:4393 exited_at:{seconds:1768869717 nanos:224082783}" Jan 20 00:41:57.224479 systemd[1]: cri-containerd-41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0.scope: Consumed 330ms CPU time, 186.5M memory peak, 165.9M written to disk. Jan 20 00:41:57.227000 audit: BPF prog-id=195 op=UNLOAD Jan 20 00:41:57.241910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41f62e05a0c98c95331ea54d6b2eadc5c37d9aa9831c296d099ae41c6bb6fec0-rootfs.mount: Deactivated successfully. Jan 20 00:41:57.259234 kubelet[3599]: I0120 00:41:57.258556 3599 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:41:58.164037 systemd[1]: Created slice kubepods-burstable-pod1e8e6c90_c796_40a3_81f9_bb1930f6d213.slice - libcontainer container kubepods-burstable-pod1e8e6c90_c796_40a3_81f9_bb1930f6d213.slice. Jan 20 00:41:58.175508 systemd[1]: Created slice kubepods-besteffort-pode914416f_b403_4119_a223_0b5c6e18edd3.slice - libcontainer container kubepods-besteffort-pode914416f_b403_4119_a223_0b5c6e18edd3.slice. 
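Editor's note on the run of systemd "Created slice" messages here: with the systemd cgroup driver each pod gets a kubepods-<qos>-pod<uid>.slice unit in which the dashes of the pod UID are replaced by underscores, which is why csi-node-driver-t6nwm (podUID e914416f-b403-4119-a223-0b5c6e18edd3) appears as kubepods-besteffort-pode914416f_b403_4119_a223_0b5c6e18edd3.slice. A sketch of that naming rule follows; the Burstable and BestEffort cases are taken from this log, while the Guaranteed branch (no QoS segment) is an assumption about the usual layout.

```python
#!/usr/bin/env python3
# Sketch of the pod-UID-to-slice-name rule behind the "Created slice" lines.
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    escaped = pod_uid.replace("-", "_")          # systemd unit names cannot keep the dashes
    if qos_class.lower() == "guaranteed":
        return f"kubepods-pod{escaped}.slice"    # assumed layout; not shown in this excerpt
    return f"kubepods-{qos_class.lower()}-pod{escaped}.slice"

if __name__ == "__main__":
    print(pod_slice_name("e914416f-b403-4119-a223-0b5c6e18edd3", "besteffort"))
    # -> kubepods-besteffort-pode914416f_b403_4119_a223_0b5c6e18edd3.slice
```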
Jan 20 00:41:58.184539 containerd[2132]: time="2026-01-20T00:41:58.183994769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6nwm,Uid:e914416f-b403-4119-a223-0b5c6e18edd3,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:58.186016 systemd[1]: Created slice kubepods-besteffort-pod4827d07a_600d_43d7_a9a0_275dbcae9208.slice - libcontainer container kubepods-besteffort-pod4827d07a_600d_43d7_a9a0_275dbcae9208.slice. Jan 20 00:41:58.192281 systemd[1]: Created slice kubepods-besteffort-pod86fc1b8f_992e_433a_a4e5_96b8bd195d5d.slice - libcontainer container kubepods-besteffort-pod86fc1b8f_992e_433a_a4e5_96b8bd195d5d.slice. Jan 20 00:41:58.207258 systemd[1]: Created slice kubepods-burstable-pode57a0c68_c8b0_453f_a1e5_cacbecaee897.slice - libcontainer container kubepods-burstable-pode57a0c68_c8b0_453f_a1e5_cacbecaee897.slice. Jan 20 00:41:58.224445 systemd[1]: Created slice kubepods-besteffort-pod0b136bd0_6a42_4726_87cd_a3538d5ee86b.slice - libcontainer container kubepods-besteffort-pod0b136bd0_6a42_4726_87cd_a3538d5ee86b.slice. Jan 20 00:41:58.234358 systemd[1]: Created slice kubepods-besteffort-pod6eedb683_7841_469d_9465_68ae5bed2952.slice - libcontainer container kubepods-besteffort-pod6eedb683_7841_469d_9465_68ae5bed2952.slice. Jan 20 00:41:58.241957 systemd[1]: Created slice kubepods-besteffort-pod28862320_350f_4f29_92bb_d8201c93580b.slice - libcontainer container kubepods-besteffort-pod28862320_350f_4f29_92bb_d8201c93580b.slice. Jan 20 00:41:58.249581 systemd[1]: Created slice kubepods-besteffort-pod77e9acbe_87a2_440f_b406_8c8900ab52f5.slice - libcontainer container kubepods-besteffort-pod77e9acbe_87a2_440f_b406_8c8900ab52f5.slice. Jan 20 00:41:58.251880 kubelet[3599]: I0120 00:41:58.251850 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86fc1b8f-992e-433a-a4e5-96b8bd195d5d-tigera-ca-bundle\") pod \"calico-kube-controllers-57d88c779f-cqh6x\" (UID: \"86fc1b8f-992e-433a-a4e5-96b8bd195d5d\") " pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" Jan 20 00:41:58.251880 kubelet[3599]: I0120 00:41:58.251881 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e8e6c90-c796-40a3-81f9-bb1930f6d213-config-volume\") pod \"coredns-674b8bbfcf-w85xq\" (UID: \"1e8e6c90-c796-40a3-81f9-bb1930f6d213\") " pod="kube-system/coredns-674b8bbfcf-w85xq" Jan 20 00:41:58.251982 kubelet[3599]: I0120 00:41:58.251894 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdk8w\" (UniqueName: \"kubernetes.io/projected/1e8e6c90-c796-40a3-81f9-bb1930f6d213-kube-api-access-sdk8w\") pod \"coredns-674b8bbfcf-w85xq\" (UID: \"1e8e6c90-c796-40a3-81f9-bb1930f6d213\") " pod="kube-system/coredns-674b8bbfcf-w85xq" Jan 20 00:41:58.251982 kubelet[3599]: I0120 00:41:58.251904 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28862320-350f-4f29-92bb-d8201c93580b-config\") pod \"goldmane-666569f655-n9ngh\" (UID: \"28862320-350f-4f29-92bb-d8201c93580b\") " pod="calico-system/goldmane-666569f655-n9ngh" Jan 20 00:41:58.251982 kubelet[3599]: I0120 00:41:58.251913 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cps\" (UniqueName: 
\"kubernetes.io/projected/0b136bd0-6a42-4726-87cd-a3538d5ee86b-kube-api-access-q6cps\") pod \"calico-apiserver-7c8fb8fd4d-xhrnz\" (UID: \"0b136bd0-6a42-4726-87cd-a3538d5ee86b\") " pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" Jan 20 00:41:58.251982 kubelet[3599]: I0120 00:41:58.251922 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm99j\" (UniqueName: \"kubernetes.io/projected/86fc1b8f-992e-433a-a4e5-96b8bd195d5d-kube-api-access-qm99j\") pod \"calico-kube-controllers-57d88c779f-cqh6x\" (UID: \"86fc1b8f-992e-433a-a4e5-96b8bd195d5d\") " pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" Jan 20 00:41:58.251982 kubelet[3599]: I0120 00:41:58.251935 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28862320-350f-4f29-92bb-d8201c93580b-goldmane-ca-bundle\") pod \"goldmane-666569f655-n9ngh\" (UID: \"28862320-350f-4f29-92bb-d8201c93580b\") " pod="calico-system/goldmane-666569f655-n9ngh" Jan 20 00:41:58.252063 kubelet[3599]: I0120 00:41:58.251945 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e57a0c68-c8b0-453f-a1e5-cacbecaee897-config-volume\") pod \"coredns-674b8bbfcf-smdnr\" (UID: \"e57a0c68-c8b0-453f-a1e5-cacbecaee897\") " pod="kube-system/coredns-674b8bbfcf-smdnr" Jan 20 00:41:58.252063 kubelet[3599]: I0120 00:41:58.251957 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/28862320-350f-4f29-92bb-d8201c93580b-goldmane-key-pair\") pod \"goldmane-666569f655-n9ngh\" (UID: \"28862320-350f-4f29-92bb-d8201c93580b\") " pod="calico-system/goldmane-666569f655-n9ngh" Jan 20 00:41:58.252063 kubelet[3599]: I0120 00:41:58.251967 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bl57\" (UniqueName: \"kubernetes.io/projected/e57a0c68-c8b0-453f-a1e5-cacbecaee897-kube-api-access-9bl57\") pod \"coredns-674b8bbfcf-smdnr\" (UID: \"e57a0c68-c8b0-453f-a1e5-cacbecaee897\") " pod="kube-system/coredns-674b8bbfcf-smdnr" Jan 20 00:41:58.252063 kubelet[3599]: I0120 00:41:58.251978 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpls5\" (UniqueName: \"kubernetes.io/projected/77e9acbe-87a2-440f-b406-8c8900ab52f5-kube-api-access-mpls5\") pod \"calico-apiserver-6d94f7fcbb-jr4rt\" (UID: \"77e9acbe-87a2-440f-b406-8c8900ab52f5\") " pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" Jan 20 00:41:58.252063 kubelet[3599]: I0120 00:41:58.251997 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkm7x\" (UniqueName: \"kubernetes.io/projected/6eedb683-7841-469d-9465-68ae5bed2952-kube-api-access-xkm7x\") pod \"calico-apiserver-7c8fb8fd4d-9hn8w\" (UID: \"6eedb683-7841-469d-9465-68ae5bed2952\") " pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" Jan 20 00:41:58.252610 kubelet[3599]: I0120 00:41:58.252007 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-backend-key-pair\") pod \"whisker-7577cb8d6f-jfxcp\" (UID: \"4827d07a-600d-43d7-a9a0-275dbcae9208\") 
" pod="calico-system/whisker-7577cb8d6f-jfxcp" Jan 20 00:41:58.252610 kubelet[3599]: I0120 00:41:58.252018 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-ca-bundle\") pod \"whisker-7577cb8d6f-jfxcp\" (UID: \"4827d07a-600d-43d7-a9a0-275dbcae9208\") " pod="calico-system/whisker-7577cb8d6f-jfxcp" Jan 20 00:41:58.252610 kubelet[3599]: I0120 00:41:58.252028 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6np2\" (UniqueName: \"kubernetes.io/projected/4827d07a-600d-43d7-a9a0-275dbcae9208-kube-api-access-d6np2\") pod \"whisker-7577cb8d6f-jfxcp\" (UID: \"4827d07a-600d-43d7-a9a0-275dbcae9208\") " pod="calico-system/whisker-7577cb8d6f-jfxcp" Jan 20 00:41:58.252610 kubelet[3599]: I0120 00:41:58.252037 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b136bd0-6a42-4726-87cd-a3538d5ee86b-calico-apiserver-certs\") pod \"calico-apiserver-7c8fb8fd4d-xhrnz\" (UID: \"0b136bd0-6a42-4726-87cd-a3538d5ee86b\") " pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" Jan 20 00:41:58.252610 kubelet[3599]: I0120 00:41:58.252047 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77e9acbe-87a2-440f-b406-8c8900ab52f5-calico-apiserver-certs\") pod \"calico-apiserver-6d94f7fcbb-jr4rt\" (UID: \"77e9acbe-87a2-440f-b406-8c8900ab52f5\") " pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" Jan 20 00:41:58.252690 kubelet[3599]: I0120 00:41:58.252059 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6eedb683-7841-469d-9465-68ae5bed2952-calico-apiserver-certs\") pod \"calico-apiserver-7c8fb8fd4d-9hn8w\" (UID: \"6eedb683-7841-469d-9465-68ae5bed2952\") " pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" Jan 20 00:41:58.252690 kubelet[3599]: I0120 00:41:58.252068 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shcx\" (UniqueName: \"kubernetes.io/projected/28862320-350f-4f29-92bb-d8201c93580b-kube-api-access-6shcx\") pod \"goldmane-666569f655-n9ngh\" (UID: \"28862320-350f-4f29-92bb-d8201c93580b\") " pod="calico-system/goldmane-666569f655-n9ngh" Jan 20 00:41:58.269172 containerd[2132]: time="2026-01-20T00:41:58.269008169Z" level=error msg="Failed to destroy network for sandbox \"21ea568d56f3f86dd76f799109d754b02f5f962d529200a89be796e8db1968f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.271533 systemd[1]: run-netns-cni\x2d25ceb5f3\x2d63d6\x2d97cd\x2dff2a\x2dd9e0cd0e1d05.mount: Deactivated successfully. 
Jan 20 00:41:58.282380 containerd[2132]: time="2026-01-20T00:41:58.282289998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6nwm,Uid:e914416f-b403-4119-a223-0b5c6e18edd3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ea568d56f3f86dd76f799109d754b02f5f962d529200a89be796e8db1968f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.282929 kubelet[3599]: E0120 00:41:58.282899 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ea568d56f3f86dd76f799109d754b02f5f962d529200a89be796e8db1968f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.283478 kubelet[3599]: E0120 00:41:58.283264 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ea568d56f3f86dd76f799109d754b02f5f962d529200a89be796e8db1968f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:58.283665 kubelet[3599]: E0120 00:41:58.283290 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ea568d56f3f86dd76f799109d754b02f5f962d529200a89be796e8db1968f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t6nwm" Jan 20 00:41:58.284277 kubelet[3599]: E0120 00:41:58.283933 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21ea568d56f3f86dd76f799109d754b02f5f962d529200a89be796e8db1968f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:41:58.470669 containerd[2132]: time="2026-01-20T00:41:58.470572033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w85xq,Uid:1e8e6c90-c796-40a3-81f9-bb1930f6d213,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:58.491550 containerd[2132]: time="2026-01-20T00:41:58.491476306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7577cb8d6f-jfxcp,Uid:4827d07a-600d-43d7-a9a0-275dbcae9208,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:58.511629 containerd[2132]: time="2026-01-20T00:41:58.511501662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d88c779f-cqh6x,Uid:86fc1b8f-992e-433a-a4e5-96b8bd195d5d,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:58.511801 
containerd[2132]: time="2026-01-20T00:41:58.511776575Z" level=error msg="Failed to destroy network for sandbox \"16859c57f923ad11f6da7c9c231e1dea54ce1388cb0a5a4e72cde2ad58c06ad4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.516646 containerd[2132]: time="2026-01-20T00:41:58.516590578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smdnr,Uid:e57a0c68-c8b0-453f-a1e5-cacbecaee897,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:58.520484 containerd[2132]: time="2026-01-20T00:41:58.520414493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w85xq,Uid:1e8e6c90-c796-40a3-81f9-bb1930f6d213,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16859c57f923ad11f6da7c9c231e1dea54ce1388cb0a5a4e72cde2ad58c06ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.521186 kubelet[3599]: E0120 00:41:58.520563 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16859c57f923ad11f6da7c9c231e1dea54ce1388cb0a5a4e72cde2ad58c06ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.521251 kubelet[3599]: E0120 00:41:58.521211 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16859c57f923ad11f6da7c9c231e1dea54ce1388cb0a5a4e72cde2ad58c06ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w85xq" Jan 20 00:41:58.521279 kubelet[3599]: E0120 00:41:58.521248 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16859c57f923ad11f6da7c9c231e1dea54ce1388cb0a5a4e72cde2ad58c06ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-w85xq" Jan 20 00:41:58.521352 kubelet[3599]: E0120 00:41:58.521296 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-w85xq_kube-system(1e8e6c90-c796-40a3-81f9-bb1930f6d213)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-w85xq_kube-system(1e8e6c90-c796-40a3-81f9-bb1930f6d213)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16859c57f923ad11f6da7c9c231e1dea54ce1388cb0a5a4e72cde2ad58c06ad4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-w85xq" podUID="1e8e6c90-c796-40a3-81f9-bb1930f6d213" Jan 20 00:41:58.529451 containerd[2132]: time="2026-01-20T00:41:58.529351205Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-xhrnz,Uid:0b136bd0-6a42-4726-87cd-a3538d5ee86b,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:41:58.541877 containerd[2132]: time="2026-01-20T00:41:58.541855279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-9hn8w,Uid:6eedb683-7841-469d-9465-68ae5bed2952,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:41:58.547123 containerd[2132]: time="2026-01-20T00:41:58.547102320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9ngh,Uid:28862320-350f-4f29-92bb-d8201c93580b,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:58.552683 containerd[2132]: time="2026-01-20T00:41:58.552648066Z" level=error msg="Failed to destroy network for sandbox \"9838e5049985172d03d2bde4cbb16a6ca83cae38f9ecbddbdb43b54e6634db78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.553986 containerd[2132]: time="2026-01-20T00:41:58.553924579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d94f7fcbb-jr4rt,Uid:77e9acbe-87a2-440f-b406-8c8900ab52f5,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:41:58.589900 containerd[2132]: time="2026-01-20T00:41:58.589689898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7577cb8d6f-jfxcp,Uid:4827d07a-600d-43d7-a9a0-275dbcae9208,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838e5049985172d03d2bde4cbb16a6ca83cae38f9ecbddbdb43b54e6634db78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.590325 kubelet[3599]: E0120 00:41:58.590272 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838e5049985172d03d2bde4cbb16a6ca83cae38f9ecbddbdb43b54e6634db78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.590476 kubelet[3599]: E0120 00:41:58.590432 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838e5049985172d03d2bde4cbb16a6ca83cae38f9ecbddbdb43b54e6634db78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7577cb8d6f-jfxcp" Jan 20 00:41:58.590476 kubelet[3599]: E0120 00:41:58.590450 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838e5049985172d03d2bde4cbb16a6ca83cae38f9ecbddbdb43b54e6634db78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7577cb8d6f-jfxcp" Jan 20 00:41:58.590541 kubelet[3599]: E0120 00:41:58.590501 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7577cb8d6f-jfxcp_calico-system(4827d07a-600d-43d7-a9a0-275dbcae9208)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"whisker-7577cb8d6f-jfxcp_calico-system(4827d07a-600d-43d7-a9a0-275dbcae9208)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9838e5049985172d03d2bde4cbb16a6ca83cae38f9ecbddbdb43b54e6634db78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7577cb8d6f-jfxcp" podUID="4827d07a-600d-43d7-a9a0-275dbcae9208" Jan 20 00:41:58.601636 containerd[2132]: time="2026-01-20T00:41:58.601566064Z" level=error msg="Failed to destroy network for sandbox \"d5d1ea6ad346003b5ebd1f5180a4ca787de1e072f0a4d563d4f89fe7c27db251\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.602685 containerd[2132]: time="2026-01-20T00:41:58.602659372Z" level=error msg="Failed to destroy network for sandbox \"0a859fb248b1ab151e53a7c9f73f6ee1e2494fc31439648716f6a38cadf015cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.610100 containerd[2132]: time="2026-01-20T00:41:58.610074450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d88c779f-cqh6x,Uid:86fc1b8f-992e-433a-a4e5-96b8bd195d5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d1ea6ad346003b5ebd1f5180a4ca787de1e072f0a4d563d4f89fe7c27db251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.610502 kubelet[3599]: E0120 00:41:58.610399 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d1ea6ad346003b5ebd1f5180a4ca787de1e072f0a4d563d4f89fe7c27db251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.611132 kubelet[3599]: E0120 00:41:58.610601 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d1ea6ad346003b5ebd1f5180a4ca787de1e072f0a4d563d4f89fe7c27db251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" Jan 20 00:41:58.611132 kubelet[3599]: E0120 00:41:58.610622 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d1ea6ad346003b5ebd1f5180a4ca787de1e072f0a4d563d4f89fe7c27db251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" Jan 20 00:41:58.611132 kubelet[3599]: E0120 00:41:58.610660 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-57d88c779f-cqh6x_calico-system(86fc1b8f-992e-433a-a4e5-96b8bd195d5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57d88c779f-cqh6x_calico-system(86fc1b8f-992e-433a-a4e5-96b8bd195d5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5d1ea6ad346003b5ebd1f5180a4ca787de1e072f0a4d563d4f89fe7c27db251\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:41:58.617508 containerd[2132]: time="2026-01-20T00:41:58.617455336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smdnr,Uid:e57a0c68-c8b0-453f-a1e5-cacbecaee897,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a859fb248b1ab151e53a7c9f73f6ee1e2494fc31439648716f6a38cadf015cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.618582 kubelet[3599]: E0120 00:41:58.617708 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a859fb248b1ab151e53a7c9f73f6ee1e2494fc31439648716f6a38cadf015cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.618582 kubelet[3599]: E0120 00:41:58.617739 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a859fb248b1ab151e53a7c9f73f6ee1e2494fc31439648716f6a38cadf015cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-smdnr" Jan 20 00:41:58.618582 kubelet[3599]: E0120 00:41:58.617752 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a859fb248b1ab151e53a7c9f73f6ee1e2494fc31439648716f6a38cadf015cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-smdnr" Jan 20 00:41:58.618713 kubelet[3599]: E0120 00:41:58.617792 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-smdnr_kube-system(e57a0c68-c8b0-453f-a1e5-cacbecaee897)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-smdnr_kube-system(e57a0c68-c8b0-453f-a1e5-cacbecaee897)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a859fb248b1ab151e53a7c9f73f6ee1e2494fc31439648716f6a38cadf015cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-smdnr" podUID="e57a0c68-c8b0-453f-a1e5-cacbecaee897" Jan 20 00:41:58.636847 containerd[2132]: 
time="2026-01-20T00:41:58.635926226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:41:58.645997 containerd[2132]: time="2026-01-20T00:41:58.645914428Z" level=error msg="Failed to destroy network for sandbox \"9f4201d3b104153525201abb898aa7903bacc7e83d0f3b44cf76449e987acb51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.653637 containerd[2132]: time="2026-01-20T00:41:58.653597163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-xhrnz,Uid:0b136bd0-6a42-4726-87cd-a3538d5ee86b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4201d3b104153525201abb898aa7903bacc7e83d0f3b44cf76449e987acb51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.654748 kubelet[3599]: E0120 00:41:58.653752 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4201d3b104153525201abb898aa7903bacc7e83d0f3b44cf76449e987acb51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.654748 kubelet[3599]: E0120 00:41:58.653783 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4201d3b104153525201abb898aa7903bacc7e83d0f3b44cf76449e987acb51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" Jan 20 00:41:58.654748 kubelet[3599]: E0120 00:41:58.653796 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4201d3b104153525201abb898aa7903bacc7e83d0f3b44cf76449e987acb51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" Jan 20 00:41:58.654830 kubelet[3599]: E0120 00:41:58.653823 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8fb8fd4d-xhrnz_calico-apiserver(0b136bd0-6a42-4726-87cd-a3538d5ee86b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8fb8fd4d-xhrnz_calico-apiserver(0b136bd0-6a42-4726-87cd-a3538d5ee86b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f4201d3b104153525201abb898aa7903bacc7e83d0f3b44cf76449e987acb51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:41:58.667460 containerd[2132]: time="2026-01-20T00:41:58.667423784Z" level=error msg="Failed to destroy network for sandbox 
\"bca0802e7b6157ddb9b924c0c79b1d2881db47163551b1da6ca5bd7710222ec6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.674043 containerd[2132]: time="2026-01-20T00:41:58.674007628Z" level=error msg="Failed to destroy network for sandbox \"4399ef6188593f94fd1ee61cae8c5b263ecc8eaa755d1efdb18981fe50fc9a7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.675497 containerd[2132]: time="2026-01-20T00:41:58.675373303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-9hn8w,Uid:6eedb683-7841-469d-9465-68ae5bed2952,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca0802e7b6157ddb9b924c0c79b1d2881db47163551b1da6ca5bd7710222ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.675574 kubelet[3599]: E0120 00:41:58.675511 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca0802e7b6157ddb9b924c0c79b1d2881db47163551b1da6ca5bd7710222ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.675574 kubelet[3599]: E0120 00:41:58.675547 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca0802e7b6157ddb9b924c0c79b1d2881db47163551b1da6ca5bd7710222ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" Jan 20 00:41:58.675574 kubelet[3599]: E0120 00:41:58.675560 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bca0802e7b6157ddb9b924c0c79b1d2881db47163551b1da6ca5bd7710222ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" Jan 20 00:41:58.675756 kubelet[3599]: E0120 00:41:58.675721 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8fb8fd4d-9hn8w_calico-apiserver(6eedb683-7841-469d-9465-68ae5bed2952)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8fb8fd4d-9hn8w_calico-apiserver(6eedb683-7841-469d-9465-68ae5bed2952)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bca0802e7b6157ddb9b924c0c79b1d2881db47163551b1da6ca5bd7710222ec6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:41:58.676338 
containerd[2132]: time="2026-01-20T00:41:58.675914825Z" level=error msg="Failed to destroy network for sandbox \"6b28c903474889d06cbe9a83854acca539dd52729285571aeac8771da9e8061c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.683388 containerd[2132]: time="2026-01-20T00:41:58.683322743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9ngh,Uid:28862320-350f-4f29-92bb-d8201c93580b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4399ef6188593f94fd1ee61cae8c5b263ecc8eaa755d1efdb18981fe50fc9a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.683559 kubelet[3599]: E0120 00:41:58.683534 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4399ef6188593f94fd1ee61cae8c5b263ecc8eaa755d1efdb18981fe50fc9a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.683645 kubelet[3599]: E0120 00:41:58.683630 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4399ef6188593f94fd1ee61cae8c5b263ecc8eaa755d1efdb18981fe50fc9a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n9ngh" Jan 20 00:41:58.683716 kubelet[3599]: E0120 00:41:58.683700 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4399ef6188593f94fd1ee61cae8c5b263ecc8eaa755d1efdb18981fe50fc9a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n9ngh" Jan 20 00:41:58.683806 kubelet[3599]: E0120 00:41:58.683791 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-n9ngh_calico-system(28862320-350f-4f29-92bb-d8201c93580b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-n9ngh_calico-system(28862320-350f-4f29-92bb-d8201c93580b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4399ef6188593f94fd1ee61cae8c5b263ecc8eaa755d1efdb18981fe50fc9a7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:41:58.686190 containerd[2132]: time="2026-01-20T00:41:58.686124417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d94f7fcbb-jr4rt,Uid:77e9acbe-87a2-440f-b406-8c8900ab52f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6b28c903474889d06cbe9a83854acca539dd52729285571aeac8771da9e8061c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.686291 kubelet[3599]: E0120 00:41:58.686259 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b28c903474889d06cbe9a83854acca539dd52729285571aeac8771da9e8061c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:41:58.686518 kubelet[3599]: E0120 00:41:58.686293 3599 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b28c903474889d06cbe9a83854acca539dd52729285571aeac8771da9e8061c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" Jan 20 00:41:58.686518 kubelet[3599]: E0120 00:41:58.686332 3599 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b28c903474889d06cbe9a83854acca539dd52729285571aeac8771da9e8061c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" Jan 20 00:41:58.686518 kubelet[3599]: E0120 00:41:58.686429 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d94f7fcbb-jr4rt_calico-apiserver(77e9acbe-87a2-440f-b406-8c8900ab52f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d94f7fcbb-jr4rt_calico-apiserver(77e9acbe-87a2-440f-b406-8c8900ab52f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b28c903474889d06cbe9a83854acca539dd52729285571aeac8771da9e8061c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:05.423457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308710571.mount: Deactivated successfully. 
Jan 20 00:42:05.810555 containerd[2132]: time="2026-01-20T00:42:05.810434169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:05.813323 containerd[2132]: time="2026-01-20T00:42:05.813281891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 20 00:42:05.817356 containerd[2132]: time="2026-01-20T00:42:05.817295995Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:05.821558 containerd[2132]: time="2026-01-20T00:42:05.821520639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:05.822076 containerd[2132]: time="2026-01-20T00:42:05.821810816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.18488815s" Jan 20 00:42:05.822076 containerd[2132]: time="2026-01-20T00:42:05.821838338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 00:42:05.853867 containerd[2132]: time="2026-01-20T00:42:05.853834874Z" level=info msg="CreateContainer within sandbox \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:42:05.877322 containerd[2132]: time="2026-01-20T00:42:05.875158364Z" level=info msg="Container 229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:42:05.894878 containerd[2132]: time="2026-01-20T00:42:05.894841644Z" level=info msg="CreateContainer within sandbox \"8e1acfaa472aa7cf1ef5ec883998060c50ae02a4774531a312ecb6f374776bb3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6\"" Jan 20 00:42:05.895644 containerd[2132]: time="2026-01-20T00:42:05.895618474Z" level=info msg="StartContainer for \"229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6\"" Jan 20 00:42:05.896684 containerd[2132]: time="2026-01-20T00:42:05.896657312Z" level=info msg="connecting to shim 229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6" address="unix:///run/containerd/s/333c9a00fc91a5198397935545fa5d552ffe083c99e5fcd335a3644e8f37c805" protocol=ttrpc version=3 Jan 20 00:42:05.915447 systemd[1]: Started cri-containerd-229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6.scope - libcontainer container 229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6. 
Jan 20 00:42:05.957535 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 00:42:05.957609 kernel: audit: type=1334 audit(1768869725.954:596): prog-id=196 op=LOAD Jan 20 00:42:05.954000 audit: BPF prog-id=196 op=LOAD Jan 20 00:42:05.954000 audit[4686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:05.977793 kernel: audit: type=1300 audit(1768869725.954:596): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:05.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:05.995335 kernel: audit: type=1327 audit(1768869725.954:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:05.954000 audit: BPF prog-id=197 op=LOAD Jan 20 00:42:05.999957 kernel: audit: type=1334 audit(1768869725.954:597): prog-id=197 op=LOAD Jan 20 00:42:05.954000 audit[4686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:05.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:06.018358 kernel: audit: type=1300 audit(1768869725.954:597): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:06.039768 kernel: audit: type=1327 audit(1768869725.954:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:06.039855 kernel: audit: type=1334 audit(1768869725.957:598): prog-id=197 op=UNLOAD Jan 20 00:42:05.957000 audit: BPF prog-id=197 op=UNLOAD Jan 20 00:42:05.957000 audit[4686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:06.056344 kernel: audit: type=1300 
audit(1768869725.957:598): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:05.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:06.072537 kernel: audit: type=1327 audit(1768869725.957:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:05.957000 audit: BPF prog-id=196 op=UNLOAD Jan 20 00:42:06.077516 kernel: audit: type=1334 audit(1768869725.957:599): prog-id=196 op=UNLOAD Jan 20 00:42:05.957000 audit[4686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:05.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:05.957000 audit: BPF prog-id=198 op=LOAD Jan 20 00:42:05.957000 audit[4686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4169 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:05.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232393839346364356534343730626237306463656230306166333461 Jan 20 00:42:06.080168 containerd[2132]: time="2026-01-20T00:42:06.080130786Z" level=info msg="StartContainer for \"229894cd5e4470bb70dceb00af34a7010de8c01715abdf64c92780d2b30515b6\" returns successfully" Jan 20 00:42:06.383407 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 00:42:06.383517 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 00:42:06.499170 kubelet[3599]: I0120 00:42:06.499131 3599 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-backend-key-pair\") pod \"4827d07a-600d-43d7-a9a0-275dbcae9208\" (UID: \"4827d07a-600d-43d7-a9a0-275dbcae9208\") " Jan 20 00:42:06.499170 kubelet[3599]: I0120 00:42:06.499170 3599 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-ca-bundle\") pod \"4827d07a-600d-43d7-a9a0-275dbcae9208\" (UID: \"4827d07a-600d-43d7-a9a0-275dbcae9208\") " Jan 20 00:42:06.499655 kubelet[3599]: I0120 00:42:06.499191 3599 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6np2\" (UniqueName: \"kubernetes.io/projected/4827d07a-600d-43d7-a9a0-275dbcae9208-kube-api-access-d6np2\") pod \"4827d07a-600d-43d7-a9a0-275dbcae9208\" (UID: \"4827d07a-600d-43d7-a9a0-275dbcae9208\") " Jan 20 00:42:06.501799 kubelet[3599]: I0120 00:42:06.501756 3599 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4827d07a-600d-43d7-a9a0-275dbcae9208" (UID: "4827d07a-600d-43d7-a9a0-275dbcae9208"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:42:06.508552 kubelet[3599]: I0120 00:42:06.508511 3599 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4827d07a-600d-43d7-a9a0-275dbcae9208" (UID: "4827d07a-600d-43d7-a9a0-275dbcae9208"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:42:06.509160 systemd[1]: var-lib-kubelet-pods-4827d07a\x2d600d\x2d43d7\x2da9a0\x2d275dbcae9208-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 00:42:06.511841 systemd[1]: var-lib-kubelet-pods-4827d07a\x2d600d\x2d43d7\x2da9a0\x2d275dbcae9208-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6np2.mount: Deactivated successfully. Jan 20 00:42:06.513133 kubelet[3599]: I0120 00:42:06.513102 3599 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4827d07a-600d-43d7-a9a0-275dbcae9208-kube-api-access-d6np2" (OuterVolumeSpecName: "kube-api-access-d6np2") pod "4827d07a-600d-43d7-a9a0-275dbcae9208" (UID: "4827d07a-600d-43d7-a9a0-275dbcae9208"). InnerVolumeSpecName "kube-api-access-d6np2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:42:06.550346 systemd[1]: Removed slice kubepods-besteffort-pod4827d07a_600d_43d7_a9a0_275dbcae9208.slice - libcontainer container kubepods-besteffort-pod4827d07a_600d_43d7_a9a0_275dbcae9208.slice. 
Jan 20 00:42:06.600265 kubelet[3599]: I0120 00:42:06.600226 3599 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-ca-bundle\") on node \"ci-4515.1.0-n-fc9e3ff023\" DevicePath \"\"" Jan 20 00:42:06.600378 kubelet[3599]: I0120 00:42:06.600278 3599 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6np2\" (UniqueName: \"kubernetes.io/projected/4827d07a-600d-43d7-a9a0-275dbcae9208-kube-api-access-d6np2\") on node \"ci-4515.1.0-n-fc9e3ff023\" DevicePath \"\"" Jan 20 00:42:06.600378 kubelet[3599]: I0120 00:42:06.600287 3599 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4827d07a-600d-43d7-a9a0-275dbcae9208-whisker-backend-key-pair\") on node \"ci-4515.1.0-n-fc9e3ff023\" DevicePath \"\"" Jan 20 00:42:06.721130 kubelet[3599]: I0120 00:42:06.721058 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k2llv" podStartSLOduration=1.555525538 podStartE2EDuration="20.721042421s" podCreationTimestamp="2026-01-20 00:41:46 +0000 UTC" firstStartedPulling="2026-01-20 00:41:46.656837662 +0000 UTC m=+22.203802727" lastFinishedPulling="2026-01-20 00:42:05.822354545 +0000 UTC m=+41.369319610" observedRunningTime="2026-01-20 00:42:06.710362031 +0000 UTC m=+42.257327136" watchObservedRunningTime="2026-01-20 00:42:06.721042421 +0000 UTC m=+42.268007486" Jan 20 00:42:06.739106 systemd[1]: Created slice kubepods-besteffort-pod050b7649_47d7_4543_80dc_167b27775ab2.slice - libcontainer container kubepods-besteffort-pod050b7649_47d7_4543_80dc_167b27775ab2.slice. Jan 20 00:42:06.801859 kubelet[3599]: I0120 00:42:06.801822 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s4k9\" (UniqueName: \"kubernetes.io/projected/050b7649-47d7-4543-80dc-167b27775ab2-kube-api-access-4s4k9\") pod \"whisker-78b7cf9965-hz2t4\" (UID: \"050b7649-47d7-4543-80dc-167b27775ab2\") " pod="calico-system/whisker-78b7cf9965-hz2t4" Jan 20 00:42:06.801859 kubelet[3599]: I0120 00:42:06.801861 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/050b7649-47d7-4543-80dc-167b27775ab2-whisker-backend-key-pair\") pod \"whisker-78b7cf9965-hz2t4\" (UID: \"050b7649-47d7-4543-80dc-167b27775ab2\") " pod="calico-system/whisker-78b7cf9965-hz2t4" Jan 20 00:42:06.801991 kubelet[3599]: I0120 00:42:06.801873 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050b7649-47d7-4543-80dc-167b27775ab2-whisker-ca-bundle\") pod \"whisker-78b7cf9965-hz2t4\" (UID: \"050b7649-47d7-4543-80dc-167b27775ab2\") " pod="calico-system/whisker-78b7cf9965-hz2t4" Jan 20 00:42:07.051543 containerd[2132]: time="2026-01-20T00:42:07.051497084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b7cf9965-hz2t4,Uid:050b7649-47d7-4543-80dc-167b27775ab2,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:07.187556 systemd-networkd[1725]: calif2d892fe235: Link UP Jan 20 00:42:07.188722 systemd-networkd[1725]: calif2d892fe235: Gained carrier Jan 20 00:42:07.202954 containerd[2132]: 2026-01-20 00:42:07.074 [INFO][4769] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:42:07.202954 containerd[2132]: 2026-01-20 00:42:07.106 
[INFO][4769] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0 whisker-78b7cf9965- calico-system 050b7649-47d7-4543-80dc-167b27775ab2 942 0 2026-01-20 00:42:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78b7cf9965 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 whisker-78b7cf9965-hz2t4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif2d892fe235 [] [] }} ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-" Jan 20 00:42:07.202954 containerd[2132]: 2026-01-20 00:42:07.106 [INFO][4769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.202954 containerd[2132]: 2026-01-20 00:42:07.122 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" HandleID="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.122 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" HandleID="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"whisker-78b7cf9965-hz2t4", "timestamp":"2026-01-20 00:42:07.122766926 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.122 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.123 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.123 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.128 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.131 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.134 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.135 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203219 containerd[2132]: 2026-01-20 00:42:07.136 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.136 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.137 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945 Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.141 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.145 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.65/26] block=192.168.51.64/26 handle="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.145 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.65/26] handle="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.145 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:07.203390 containerd[2132]: 2026-01-20 00:42:07.145 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.65/26] IPv6=[] ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" HandleID="k8s-pod-network.19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.203480 containerd[2132]: 2026-01-20 00:42:07.147 [INFO][4769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0", GenerateName:"whisker-78b7cf9965-", Namespace:"calico-system", SelfLink:"", UID:"050b7649-47d7-4543-80dc-167b27775ab2", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b7cf9965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"whisker-78b7cf9965-hz2t4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif2d892fe235", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:07.203480 containerd[2132]: 2026-01-20 00:42:07.147 [INFO][4769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.65/32] ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.203528 containerd[2132]: 2026-01-20 00:42:07.147 [INFO][4769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2d892fe235 ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.203528 containerd[2132]: 2026-01-20 00:42:07.189 [INFO][4769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.203557 containerd[2132]: 2026-01-20 00:42:07.189 [INFO][4769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" 
Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0", GenerateName:"whisker-78b7cf9965-", Namespace:"calico-system", SelfLink:"", UID:"050b7649-47d7-4543-80dc-167b27775ab2", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 42, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b7cf9965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945", Pod:"whisker-78b7cf9965-hz2t4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif2d892fe235", MAC:"c2:08:6b:db:ae:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:07.203587 containerd[2132]: 2026-01-20 00:42:07.199 [INFO][4769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" Namespace="calico-system" Pod="whisker-78b7cf9965-hz2t4" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-whisker--78b7cf9965--hz2t4-eth0" Jan 20 00:42:07.236697 containerd[2132]: time="2026-01-20T00:42:07.236663994Z" level=info msg="connecting to shim 19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945" address="unix:///run/containerd/s/331758650f46171819e59aebb15c010c4743efb27b5c5727270533838232d9f3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:07.254455 systemd[1]: Started cri-containerd-19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945.scope - libcontainer container 19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945. 
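With the sandbox for whisker-78b7cf9965-hz2t4 started, the entries that follow show kubelet asking containerd to pull the pod's images and ghcr.io answering 404 for ghcr.io/flatcar/calico/whisker:v3.30.4. A minimal sketch of an equivalent pull through the containerd Go client, purely illustrative and assuming the containerd 1.x client module and the default socket path on this node:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd over its default socket
	// (assumption: same path the CRI uses on this host).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images pulled for pods live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet requested; against ghcr.io this resolves
	// to 404 / "not found", matching the errors recorded below.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull failed: %v", err)
	}
	log.Printf("pulled %s", img.Name())
}
```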
Jan 20 00:42:07.260000 audit: BPF prog-id=199 op=LOAD Jan 20 00:42:07.262000 audit: BPF prog-id=200 op=LOAD Jan 20 00:42:07.262000 audit[4814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.262000 audit: BPF prog-id=200 op=UNLOAD Jan 20 00:42:07.262000 audit[4814]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.263000 audit: BPF prog-id=201 op=LOAD Jan 20 00:42:07.263000 audit[4814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.263000 audit: BPF prog-id=202 op=LOAD Jan 20 00:42:07.263000 audit[4814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.263000 audit: BPF prog-id=202 op=UNLOAD Jan 20 00:42:07.263000 audit[4814]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.263000 audit: BPF prog-id=201 op=UNLOAD Jan 20 00:42:07.263000 audit[4814]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.263000 audit: BPF prog-id=203 op=LOAD Jan 20 00:42:07.263000 audit[4814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4802 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:07.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616435653761626561383439646662613336633633313466333433 Jan 20 00:42:07.285510 containerd[2132]: time="2026-01-20T00:42:07.285479263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b7cf9965-hz2t4,Uid:050b7649-47d7-4543-80dc-167b27775ab2,Namespace:calico-system,Attempt:0,} returns sandbox id \"19ad5e7abea849dfba36c6314f34308a60b3d604386ee9cf311897658e834945\"" Jan 20 00:42:07.289132 containerd[2132]: time="2026-01-20T00:42:07.288936645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:42:07.527961 containerd[2132]: time="2026-01-20T00:42:07.527067217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:07.530821 containerd[2132]: time="2026-01-20T00:42:07.530751005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:42:07.530821 containerd[2132]: time="2026-01-20T00:42:07.530778311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:07.532591 kubelet[3599]: E0120 00:42:07.532552 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:07.532969 kubelet[3599]: E0120 00:42:07.532610 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:07.537439 kubelet[3599]: E0120 00:42:07.537399 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4ed88e40380e41739ab0886868a4c216,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:07.539187 containerd[2132]: time="2026-01-20T00:42:07.539168284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:42:07.776851 containerd[2132]: time="2026-01-20T00:42:07.776763294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:07.906009 containerd[2132]: time="2026-01-20T00:42:07.905920667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:42:07.906009 containerd[2132]: time="2026-01-20T00:42:07.905959293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:07.907018 kubelet[3599]: E0120 00:42:07.906954 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:07.907018 kubelet[3599]: E0120 00:42:07.907004 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:07.907473 kubelet[3599]: E0120 00:42:07.907396 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:07.909491 kubelet[3599]: E0120 00:42:07.909434 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:42:08.016000 audit: BPF prog-id=204 op=LOAD Jan 20 00:42:08.016000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc5d5088 a2=98 a3=ffffdc5d5078 items=0 ppid=4877 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.016000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 00:42:08.016000 audit: BPF prog-id=204 op=UNLOAD Jan 20 00:42:08.016000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdc5d5058 a3=0 items=0 ppid=4877 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 00:42:08.016000 audit: BPF prog-id=205 op=LOAD Jan 20 00:42:08.016000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc5d4f38 a2=74 a3=95 items=0 ppid=4877 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 00:42:08.016000 audit: BPF prog-id=205 op=UNLOAD Jan 20 00:42:08.016000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4877 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 00:42:08.016000 audit: BPF prog-id=206 op=LOAD Jan 20 00:42:08.016000 audit[4982]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc5d4f68 a2=40 a3=ffffdc5d4f98 items=0 ppid=4877 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 00:42:08.016000 audit: BPF prog-id=206 op=UNLOAD Jan 20 00:42:08.016000 audit[4982]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdc5d4f98 items=0 ppid=4877 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.016000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 00:42:08.018000 audit: BPF prog-id=207 op=LOAD Jan 20 00:42:08.018000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3f929f8 a2=98 a3=ffffd3f929e8 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.018000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.018000 audit: BPF prog-id=207 op=UNLOAD Jan 20 00:42:08.018000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd3f929c8 a3=0 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.018000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.018000 audit: BPF prog-id=208 op=LOAD Jan 20 00:42:08.018000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3f92688 a2=74 a3=95 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.018000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.018000 audit: BPF prog-id=208 op=UNLOAD Jan 20 00:42:08.018000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.018000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.018000 audit: BPF prog-id=209 op=LOAD Jan 20 00:42:08.018000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3f926e8 a2=94 a3=2 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.018000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.018000 audit: BPF prog-id=209 op=UNLOAD Jan 20 00:42:08.018000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.018000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.098000 audit: BPF prog-id=210 op=LOAD Jan 20 00:42:08.098000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd3f926a8 a2=40 a3=ffffd3f926d8 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
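The audit PROCTITLE fields throughout this section carry each process's command line as a hex-encoded, NUL-separated argv. A small illustrative Go sketch (not part of any tool appearing in the log) that decodes one of the shorter bpftool records above:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Hex proctitle copied verbatim from an audit record above.
	const proctitle = "627066746F6F6C006D6170006C697374002D2D6A736F6E"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv elements are separated by NUL bytes; join them with spaces.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Prints: bpftool map list --json
}
```

The longer records decode the same way, to bpftool map create and bpftool prog load invocations against Calico's pinned BPF objects under /sys/fs/bpf/.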
Jan 20 00:42:08.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.098000 audit: BPF prog-id=210 op=UNLOAD Jan 20 00:42:08.098000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd3f926d8 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.104000 audit: BPF prog-id=211 op=LOAD Jan 20 00:42:08.104000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd3f926b8 a2=94 a3=4 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.104000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.104000 audit: BPF prog-id=211 op=UNLOAD Jan 20 00:42:08.104000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.104000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=212 op=LOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd3f924f8 a2=94 a3=5 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=212 op=UNLOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=213 op=LOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd3f92728 a2=94 a3=6 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=213 op=UNLOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=214 op=LOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd3f91ef8 a2=94 a3=83 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=215 op=LOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd3f91cb8 a2=94 a3=2 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.105000 audit: BPF prog-id=215 op=UNLOAD Jan 20 00:42:08.105000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.105000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.106000 audit: BPF prog-id=214 op=UNLOAD Jan 20 00:42:08.106000 audit[4983]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2d25d620 a3=2d250b00 items=0 ppid=4877 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 00:42:08.112000 audit: BPF prog-id=216 op=LOAD Jan 20 00:42:08.112000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8de0fd8 a2=98 a3=ffffd8de0fc8 items=0 ppid=4877 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 00:42:08.112000 audit: BPF prog-id=216 op=UNLOAD Jan 20 00:42:08.112000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd8de0fa8 a3=0 items=0 ppid=4877 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 00:42:08.112000 audit: BPF prog-id=217 op=LOAD Jan 20 00:42:08.112000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8de0e88 a2=74 a3=95 items=0 ppid=4877 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 00:42:08.112000 audit: BPF prog-id=217 op=UNLOAD Jan 20 00:42:08.112000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4877 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 00:42:08.112000 audit: BPF prog-id=218 op=LOAD Jan 20 00:42:08.112000 audit[4986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd8de0eb8 a2=40 a3=ffffd8de0ee8 items=0 ppid=4877 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 00:42:08.112000 audit: BPF prog-id=218 op=UNLOAD Jan 20 00:42:08.112000 audit[4986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd8de0ee8 items=0 ppid=4877 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 00:42:08.246045 systemd-networkd[1725]: vxlan.calico: Link UP Jan 20 00:42:08.246050 systemd-networkd[1725]: vxlan.calico: Gained carrier Jan 20 00:42:08.266000 audit: BPF prog-id=219 op=LOAD Jan 20 00:42:08.266000 audit[5012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7684548 a2=98 a3=ffffc7684538 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.266000 audit: BPF prog-id=219 op=UNLOAD Jan 20 00:42:08.266000 audit[5012]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc7684518 a3=0 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.266000 audit: BPF prog-id=220 op=LOAD Jan 20 00:42:08.266000 audit[5012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7684228 a2=74 a3=95 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=220 op=UNLOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=221 op=LOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7684288 a2=94 a3=2 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=221 op=UNLOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=222 op=LOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7684108 a2=40 a3=ffffc7684138 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=222 op=UNLOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc7684138 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=223 op=LOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7684258 a2=94 a3=b7 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=223 op=UNLOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=224 op=LOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7683908 a2=94 a3=2 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=224 op=UNLOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.267000 audit: BPF prog-id=225 op=LOAD Jan 20 00:42:08.267000 audit[5012]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=6 a0=5 a1=ffffc7683a98 a2=94 a3=30 items=0 ppid=4877 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 00:42:08.272000 audit: BPF prog-id=226 op=LOAD Jan 20 00:42:08.272000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5b8f8f8 a2=98 a3=ffffd5b8f8e8 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.272000 audit: BPF prog-id=226 op=UNLOAD Jan 20 00:42:08.272000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd5b8f8c8 a3=0 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.272000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.273000 audit: BPF prog-id=227 op=LOAD Jan 20 00:42:08.273000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5b8f588 a2=74 a3=95 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.273000 audit: BPF prog-id=227 op=UNLOAD Jan 20 00:42:08.273000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.273000 audit: BPF prog-id=228 op=LOAD Jan 20 00:42:08.273000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5b8f5e8 a2=94 a3=2 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.273000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.273000 audit: BPF prog-id=228 op=UNLOAD Jan 20 00:42:08.273000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.354000 audit: BPF prog-id=229 op=LOAD Jan 20 00:42:08.354000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5b8f5a8 a2=40 a3=ffffd5b8f5d8 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.354000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.354000 audit: BPF prog-id=229 op=UNLOAD Jan 20 00:42:08.354000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd5b8f5d8 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.354000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.360000 audit: BPF prog-id=230 op=LOAD Jan 20 00:42:08.360000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5b8f5b8 a2=94 a3=4 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.361000 audit: BPF prog-id=230 op=UNLOAD Jan 20 00:42:08.361000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.361000 audit: BPF prog-id=231 op=LOAD Jan 20 00:42:08.361000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5b8f3f8 a2=94 a3=5 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.361000 audit: BPF prog-id=231 op=UNLOAD Jan 20 00:42:08.361000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.361000 audit: BPF prog-id=232 op=LOAD Jan 20 00:42:08.361000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5b8f628 a2=94 a3=6 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.362000 audit: BPF prog-id=232 op=UNLOAD Jan 20 00:42:08.362000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.362000 audit: BPF prog-id=233 op=LOAD Jan 20 00:42:08.362000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5b8edf8 a2=94 a3=83 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.362000 audit: BPF prog-id=234 op=LOAD Jan 20 00:42:08.362000 audit[5016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd5b8ebb8 a2=94 a3=2 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.362000 audit: BPF prog-id=234 op=UNLOAD Jan 20 00:42:08.362000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 
items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.362000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.363000 audit: BPF prog-id=233 op=UNLOAD Jan 20 00:42:08.363000 audit[5016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2aa76620 a3=2aa69b00 items=0 ppid=4877 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.363000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 00:42:08.367000 audit: BPF prog-id=225 op=UNLOAD Jan 20 00:42:08.367000 audit[4877]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40013d0580 a2=0 a3=0 items=0 ppid=4863 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.367000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 00:42:08.542022 kubelet[3599]: I0120 00:42:08.541987 3599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4827d07a-600d-43d7-a9a0-275dbcae9208" path="/var/lib/kubelet/pods/4827d07a-600d-43d7-a9a0-275dbcae9208/volumes" Jan 20 00:42:08.584603 systemd-networkd[1725]: calif2d892fe235: Gained IPv6LL Jan 20 00:42:08.603000 audit[5039]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5039 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:08.603000 audit[5039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe2396990 a2=0 a3=ffff9d016fa8 items=0 ppid=4877 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.603000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:08.610000 audit[5041]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:08.610000 audit[5041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff9db2cb0 a2=0 a3=ffffbb0fcfa8 items=0 ppid=4877 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.610000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:08.655197 kubelet[3599]: E0120 00:42:08.655160 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:42:08.672000 audit[5052]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:08.672000 audit[5052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd6789840 a2=0 a3=1 items=0 ppid=3769 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:08.680000 audit[5052]: NETFILTER_CFG table=nat:125 family=2 entries=14 op=nft_register_rule pid=5052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:08.680000 audit[5052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd6789840 a2=0 a3=1 items=0 ppid=3769 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:08.684000 audit[5040]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:08.684000 audit[5040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc92277c0 a2=0 a3=ffff91ae0fa8 items=0 ppid=4877 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.684000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:08.734000 audit[5042]: NETFILTER_CFG table=filter:127 family=2 entries=94 op=nft_register_chain pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:08.734000 audit[5042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffc0109250 a2=0 a3=ffffb0790fa8 items=0 ppid=4877 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:08.734000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 
00:42:09.540746 containerd[2132]: time="2026-01-20T00:42:09.540659909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-9hn8w,Uid:6eedb683-7841-469d-9465-68ae5bed2952,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:42:09.541417 containerd[2132]: time="2026-01-20T00:42:09.541163176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w85xq,Uid:1e8e6c90-c796-40a3-81f9-bb1930f6d213,Namespace:kube-system,Attempt:0,}" Jan 20 00:42:09.608528 systemd-networkd[1725]: vxlan.calico: Gained IPv6LL Jan 20 00:42:09.655930 systemd-networkd[1725]: cali5516c1d59a0: Link UP Jan 20 00:42:09.658141 systemd-networkd[1725]: cali5516c1d59a0: Gained carrier Jan 20 00:42:09.677750 containerd[2132]: 2026-01-20 00:42:09.600 [INFO][5060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0 coredns-674b8bbfcf- kube-system 1e8e6c90-c796-40a3-81f9-bb1930f6d213 868 0 2026-01-20 00:41:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 coredns-674b8bbfcf-w85xq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5516c1d59a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-" Jan 20 00:42:09.677750 containerd[2132]: 2026-01-20 00:42:09.601 [INFO][5060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.677750 containerd[2132]: 2026-01-20 00:42:09.624 [INFO][5080] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" HandleID="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.624 [INFO][5080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" HandleID="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c96e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"coredns-674b8bbfcf-w85xq", "timestamp":"2026-01-20 00:42:09.624555579 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.624 [INFO][5080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.624 [INFO][5080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
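The audit PROCTITLE records in this log carry the audited process's full command line as NUL-separated argv elements encoded in hex, which is why they look opaque next to the readable comm= and exe= fields. A minimal decoding sketch, assuming the standard Linux audit PROCTITLE encoding and using one of the iptables-restore records above as input:

    # Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
    hex_proctitle = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The same decoding applies to the bpftool and runc PROCTITLE entries elsewhere in the log.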
Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.624 [INFO][5080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.629 [INFO][5080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.632 [INFO][5080] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.635 [INFO][5080] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.636 [INFO][5080] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.677891 containerd[2132]: 2026-01-20 00:42:09.637 [INFO][5080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.638 [INFO][5080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.638 [INFO][5080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11 Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.642 [INFO][5080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.646 [INFO][5080] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.66/26] block=192.168.51.64/26 handle="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.646 [INFO][5080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.66/26] handle="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.646 [INFO][5080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
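The ipam lines above show Calico confirming the node's affinity for block 192.168.51.64/26 and then claiming 192.168.51.66 from it for the coredns pod. A small sketch with the Python standard library, only to make the block arithmetic behind those messages concrete:

    # Sanity-check the /26 affinity block referenced in the IPAM log lines.
    import ipaddress

    block = ipaddress.ip_network("192.168.51.64/26")
    pod_ip = ipaddress.ip_address("192.168.51.66")

    print(block.num_addresses)        # 64 addresses per /26 block
    print(block[0], "-", block[-1])   # 192.168.51.64 - 192.168.51.127
    print(pod_ip in block)            # True: the assigned IP falls inside the block

The later assignments in this log (192.168.51.67 for the calico-apiserver pod and 192.168.51.68 for goldmane) come from the same block.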
Jan 20 00:42:09.678298 containerd[2132]: 2026-01-20 00:42:09.646 [INFO][5080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.66/26] IPv6=[] ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" HandleID="k8s-pod-network.2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.678467 containerd[2132]: 2026-01-20 00:42:09.650 [INFO][5060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1e8e6c90-c796-40a3-81f9-bb1930f6d213", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"coredns-674b8bbfcf-w85xq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5516c1d59a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:09.678467 containerd[2132]: 2026-01-20 00:42:09.650 [INFO][5060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.66/32] ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.678467 containerd[2132]: 2026-01-20 00:42:09.650 [INFO][5060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5516c1d59a0 ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.678467 containerd[2132]: 2026-01-20 00:42:09.657 [INFO][5060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.678467 containerd[2132]: 2026-01-20 00:42:09.659 [INFO][5060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1e8e6c90-c796-40a3-81f9-bb1930f6d213", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11", Pod:"coredns-674b8bbfcf-w85xq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5516c1d59a0", MAC:"82:1b:d3:1b:b6:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:09.678467 containerd[2132]: 2026-01-20 00:42:09.676 [INFO][5060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" Namespace="kube-system" Pod="coredns-674b8bbfcf-w85xq" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--w85xq-eth0" Jan 20 00:42:09.689000 audit[5104]: NETFILTER_CFG table=filter:128 family=2 entries=42 op=nft_register_chain pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:09.689000 audit[5104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc99dd740 a2=0 a3=ffff891a3fa8 items=0 ppid=4877 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.689000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:09.719057 containerd[2132]: time="2026-01-20T00:42:09.718841432Z" level=info 
msg="connecting to shim 2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11" address="unix:///run/containerd/s/c6792cc081fc3e2d3c6129042d08b15cbbad40bd87845fe12458c3ef81a52e29" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:09.741458 systemd[1]: Started cri-containerd-2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11.scope - libcontainer container 2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11. Jan 20 00:42:09.748000 audit: BPF prog-id=235 op=LOAD Jan 20 00:42:09.748000 audit: BPF prog-id=236 op=LOAD Jan 20 00:42:09.748000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.749000 audit: BPF prog-id=236 op=UNLOAD Jan 20 00:42:09.749000 audit[5124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.749000 audit: BPF prog-id=237 op=LOAD Jan 20 00:42:09.749000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.749000 audit: BPF prog-id=238 op=LOAD Jan 20 00:42:09.749000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.749000 audit: BPF prog-id=238 op=UNLOAD Jan 20 00:42:09.749000 audit[5124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.749000 audit: BPF prog-id=237 op=UNLOAD Jan 20 00:42:09.749000 audit[5124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.749000 audit: BPF prog-id=239 op=LOAD Jan 20 00:42:09.749000 audit[5124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5113 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373063613630306536316433333338623237623139623236333961 Jan 20 00:42:09.770633 systemd-networkd[1725]: cali65703926848: Link UP Jan 20 00:42:09.771433 systemd-networkd[1725]: cali65703926848: Gained carrier Jan 20 00:42:09.787546 containerd[2132]: time="2026-01-20T00:42:09.787514518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w85xq,Uid:1e8e6c90-c796-40a3-81f9-bb1930f6d213,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11\"" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.600 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0 calico-apiserver-7c8fb8fd4d- calico-apiserver 6eedb683-7841-469d-9465-68ae5bed2952 875 0 2026-01-20 00:41:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8fb8fd4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 calico-apiserver-7c8fb8fd4d-9hn8w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65703926848 [] [] }} ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.601 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" 
WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.632 [INFO][5082] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" HandleID="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.632 [INFO][5082] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" HandleID="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001235a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"calico-apiserver-7c8fb8fd4d-9hn8w", "timestamp":"2026-01-20 00:42:09.632466223 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.632 [INFO][5082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.646 [INFO][5082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.646 [INFO][5082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.730 [INFO][5082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.737 [INFO][5082] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.740 [INFO][5082] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.743 [INFO][5082] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.745 [INFO][5082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.745 [INFO][5082] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.746 [INFO][5082] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.751 [INFO][5082] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 
00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.760 [INFO][5082] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.67/26] block=192.168.51.64/26 handle="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.760 [INFO][5082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.67/26] handle="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.760 [INFO][5082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:09.788507 containerd[2132]: 2026-01-20 00:42:09.760 [INFO][5082] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.67/26] IPv6=[] ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" HandleID="k8s-pod-network.c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.788975 containerd[2132]: 2026-01-20 00:42:09.764 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0", GenerateName:"calico-apiserver-7c8fb8fd4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6eedb683-7841-469d-9465-68ae5bed2952", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8fb8fd4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"calico-apiserver-7c8fb8fd4d-9hn8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65703926848", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:09.788975 containerd[2132]: 2026-01-20 00:42:09.764 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.67/32] ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.788975 containerd[2132]: 2026-01-20 00:42:09.764 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65703926848 
ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.788975 containerd[2132]: 2026-01-20 00:42:09.772 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.788975 containerd[2132]: 2026-01-20 00:42:09.773 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0", GenerateName:"calico-apiserver-7c8fb8fd4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6eedb683-7841-469d-9465-68ae5bed2952", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8fb8fd4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce", Pod:"calico-apiserver-7c8fb8fd4d-9hn8w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65703926848", MAC:"8e:82:31:19:22:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:09.788975 containerd[2132]: 2026-01-20 00:42:09.784 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-9hn8w" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--9hn8w-eth0" Jan 20 00:42:09.800000 audit[5156]: NETFILTER_CFG table=filter:129 family=2 entries=54 op=nft_register_chain pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:09.800000 audit[5156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffc7b95ba0 a2=0 a3=ffffa0ac6fa8 items=0 ppid=4877 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 00:42:09.800000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:09.808314 containerd[2132]: time="2026-01-20T00:42:09.808273913Z" level=info msg="CreateContainer within sandbox \"2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:42:09.833561 containerd[2132]: time="2026-01-20T00:42:09.833513122Z" level=info msg="Container dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:42:09.846783 containerd[2132]: time="2026-01-20T00:42:09.846720240Z" level=info msg="CreateContainer within sandbox \"2c70ca600e61d3338b27b19b2639a0d489aa29a5eb87e5d64b3a6816f145ff11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797\"" Jan 20 00:42:09.847394 containerd[2132]: time="2026-01-20T00:42:09.847261004Z" level=info msg="StartContainer for \"dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797\"" Jan 20 00:42:09.848372 containerd[2132]: time="2026-01-20T00:42:09.848351436Z" level=info msg="connecting to shim dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797" address="unix:///run/containerd/s/c6792cc081fc3e2d3c6129042d08b15cbbad40bd87845fe12458c3ef81a52e29" protocol=ttrpc version=3 Jan 20 00:42:09.865450 systemd[1]: Started cri-containerd-dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797.scope - libcontainer container dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797. Jan 20 00:42:09.874000 audit: BPF prog-id=240 op=LOAD Jan 20 00:42:09.876426 containerd[2132]: time="2026-01-20T00:42:09.876405268Z" level=info msg="connecting to shim c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce" address="unix:///run/containerd/s/516273341a4d0bea7ee2d6f74969cdab49a4b9400844864801a238bea5a6f0b5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:09.875000 audit: BPF prog-id=241 op=LOAD Jan 20 00:42:09.875000 audit[5157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.875000 audit: BPF prog-id=241 op=UNLOAD Jan 20 00:42:09.875000 audit[5157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.875000 audit: BPF prog-id=242 op=LOAD Jan 20 00:42:09.875000 audit[5157]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.875000 audit: BPF prog-id=243 op=LOAD Jan 20 00:42:09.875000 audit[5157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.875000 audit: BPF prog-id=243 op=UNLOAD Jan 20 00:42:09.875000 audit[5157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.876000 audit: BPF prog-id=242 op=UNLOAD Jan 20 00:42:09.876000 audit[5157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.876000 audit: BPF prog-id=244 op=LOAD Jan 20 00:42:09.876000 audit[5157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5113 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466613965373937396430376562316361316432326633633834646266 Jan 20 00:42:09.906476 containerd[2132]: time="2026-01-20T00:42:09.906453101Z" level=info msg="StartContainer for \"dfa9e7979d07eb1ca1d22f3c84dbf419988c0868f9c39227bc1233a61eb18797\" returns successfully" Jan 20 00:42:09.908568 systemd[1]: Started 
cri-containerd-c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce.scope - libcontainer container c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce. Jan 20 00:42:09.919000 audit: BPF prog-id=245 op=LOAD Jan 20 00:42:09.919000 audit: BPF prog-id=246 op=LOAD Jan 20 00:42:09.919000 audit[5196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.919000 audit: BPF prog-id=246 op=UNLOAD Jan 20 00:42:09.919000 audit[5196]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.920000 audit: BPF prog-id=247 op=LOAD Jan 20 00:42:09.920000 audit[5196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.920000 audit: BPF prog-id=248 op=LOAD Jan 20 00:42:09.920000 audit[5196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.920000 audit: BPF prog-id=248 op=UNLOAD Jan 20 00:42:09.920000 audit[5196]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.920000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.920000 audit: BPF prog-id=247 op=UNLOAD Jan 20 00:42:09.920000 audit[5196]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.920000 audit: BPF prog-id=249 op=LOAD Jan 20 00:42:09.920000 audit[5196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5185 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:09.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330663637646362326464633236653839346538376562353966333366 Jan 20 00:42:09.959747 containerd[2132]: time="2026-01-20T00:42:09.959701556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-9hn8w,Uid:6eedb683-7841-469d-9465-68ae5bed2952,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c0f67dcb2ddc26e894e87eb59f33fe59ef7a5e151ddcc2bbf9f8a7bd64b115ce\"" Jan 20 00:42:09.961066 containerd[2132]: time="2026-01-20T00:42:09.961039253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:10.183231 containerd[2132]: time="2026-01-20T00:42:10.183114446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:10.186688 containerd[2132]: time="2026-01-20T00:42:10.186637936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:10.186884 containerd[2132]: time="2026-01-20T00:42:10.186847624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:10.187168 kubelet[3599]: E0120 00:42:10.187104 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:10.187168 kubelet[3599]: E0120 00:42:10.187154 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:10.187491 
kubelet[3599]: E0120 00:42:10.187384 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8fb8fd4d-9hn8w_calico-apiserver(6eedb683-7841-469d-9465-68ae5bed2952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:10.188782 kubelet[3599]: E0120 00:42:10.188731 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:42:10.541539 containerd[2132]: time="2026-01-20T00:42:10.541484765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9ngh,Uid:28862320-350f-4f29-92bb-d8201c93580b,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:10.542124 containerd[2132]: time="2026-01-20T00:42:10.542028569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d94f7fcbb-jr4rt,Uid:77e9acbe-87a2-440f-b406-8c8900ab52f5,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:42:10.664158 kubelet[3599]: E0120 00:42:10.661939 3599 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:42:10.664514 systemd-networkd[1725]: caliefcc097399e: Link UP Jan 20 00:42:10.666717 systemd-networkd[1725]: caliefcc097399e: Gained carrier Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.594 [INFO][5235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0 goldmane-666569f655- calico-system 28862320-350f-4f29-92bb-d8201c93580b 872 0 2026-01-20 00:41:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 goldmane-666569f655-n9ngh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliefcc097399e [] [] }} ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.595 [INFO][5235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.622 [INFO][5259] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" HandleID="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.622 [INFO][5259] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" HandleID="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"goldmane-666569f655-n9ngh", "timestamp":"2026-01-20 00:42:10.622395877 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.622 [INFO][5259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.622 [INFO][5259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.622 [INFO][5259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.629 [INFO][5259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.633 [INFO][5259] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.636 [INFO][5259] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.637 [INFO][5259] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.639 [INFO][5259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.639 [INFO][5259] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.640 [INFO][5259] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.648 [INFO][5259] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.652 [INFO][5259] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.68/26] block=192.168.51.64/26 handle="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.652 [INFO][5259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.68/26] handle="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.652 [INFO][5259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
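The pull failures recorded earlier ("fetch failed after status: 404 Not Found" for ghcr.io/flatcar/calico/apiserver:v3.30.4) can be reproduced outside containerd by asking the registry for the manifest directly. A rough sketch, assuming the standard OCI distribution token flow that ghcr.io exposes for anonymous pulls of public images; the repository and tag are taken from the log, and the endpoints are assumptions rather than something the log confirms:

    # Check whether a tag resolves on ghcr.io via the OCI distribution API.
    import json
    import urllib.error
    import urllib.request

    repo = "flatcar/calico/apiserver"   # repository from the failing pull above
    tag = "v3.30.4"

    # Anonymous pull token (assumption: standard token endpoint for public images).
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        print("tag found")
    except urllib.error.HTTPError as err:
        print("registry answered", err.code)   # 404 here matches the containerd error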
Jan 20 00:42:10.690653 containerd[2132]: 2026-01-20 00:42:10.652 [INFO][5259] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.68/26] IPv6=[] ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" HandleID="k8s-pod-network.7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.691031 containerd[2132]: 2026-01-20 00:42:10.657 [INFO][5235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"28862320-350f-4f29-92bb-d8201c93580b", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"goldmane-666569f655-n9ngh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliefcc097399e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:10.691031 containerd[2132]: 2026-01-20 00:42:10.657 [INFO][5235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.68/32] ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.691031 containerd[2132]: 2026-01-20 00:42:10.657 [INFO][5235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefcc097399e ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.691031 containerd[2132]: 2026-01-20 00:42:10.668 [INFO][5235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.691031 containerd[2132]: 2026-01-20 00:42:10.669 [INFO][5235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" 
Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"28862320-350f-4f29-92bb-d8201c93580b", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f", Pod:"goldmane-666569f655-n9ngh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliefcc097399e", MAC:"62:4e:ee:ea:ed:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:10.691031 containerd[2132]: 2026-01-20 00:42:10.686 [INFO][5235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" Namespace="calico-system" Pod="goldmane-666569f655-n9ngh" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-goldmane--666569f655--n9ngh-eth0" Jan 20 00:42:10.711000 audit[5285]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:10.711000 audit[5285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff7d863e0 a2=0 a3=1 items=0 ppid=3769 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:10.715000 audit[5285]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:10.715000 audit[5285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff7d863e0 a2=0 a3=1 items=0 ppid=3769 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:10.722000 audit[5288]: NETFILTER_CFG table=filter:132 family=2 entries=52 op=nft_register_chain pid=5288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 
00:42:10.722000 audit[5288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27556 a0=3 a1=ffffd90f8a90 a2=0 a3=ffff931b4fa8 items=0 ppid=4877 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.722000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:10.727000 audit[5289]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=5289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:10.727000 audit[5289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffed7df0a0 a2=0 a3=1 items=0 ppid=3769 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:10.731000 audit[5289]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=5289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:10.731000 audit[5289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffed7df0a0 a2=0 a3=1 items=0 ppid=3769 pid=5289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:10.744697 containerd[2132]: time="2026-01-20T00:42:10.744629165Z" level=info msg="connecting to shim 7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f" address="unix:///run/containerd/s/a69aa1e3c6769425574febb9690d04b30fed9a3f9f1812cdddd0f63884e2621f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:10.769582 systemd[1]: Started cri-containerd-7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f.scope - libcontainer container 7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f. 
Jan 20 00:42:10.776484 systemd-networkd[1725]: calie829a990937: Link UP Jan 20 00:42:10.777807 systemd-networkd[1725]: calie829a990937: Gained carrier Jan 20 00:42:10.783000 audit: BPF prog-id=250 op=LOAD Jan 20 00:42:10.784000 audit: BPF prog-id=251 op=LOAD Jan 20 00:42:10.784000 audit[5310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.784000 audit: BPF prog-id=251 op=UNLOAD Jan 20 00:42:10.784000 audit[5310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.784000 audit: BPF prog-id=252 op=LOAD Jan 20 00:42:10.784000 audit[5310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.784000 audit: BPF prog-id=253 op=LOAD Jan 20 00:42:10.784000 audit[5310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.785000 audit: BPF prog-id=253 op=UNLOAD Jan 20 00:42:10.785000 audit[5310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.785000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.785000 audit: BPF prog-id=252 op=UNLOAD Jan 20 00:42:10.785000 audit[5310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.785000 audit: BPF prog-id=254 op=LOAD Jan 20 00:42:10.785000 audit[5310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5299 pid=5310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738343733333837353266363335346364326465343363656236653037 Jan 20 00:42:10.790348 kubelet[3599]: I0120 00:42:10.790261 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w85xq" podStartSLOduration=39.790240699 podStartE2EDuration="39.790240699s" podCreationTimestamp="2026-01-20 00:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:42:10.696579814 +0000 UTC m=+46.243544879" watchObservedRunningTime="2026-01-20 00:42:10.790240699 +0000 UTC m=+46.337205764" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.608 [INFO][5239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0 calico-apiserver-6d94f7fcbb- calico-apiserver 77e9acbe-87a2-440f-b406-8c8900ab52f5 873 0 2026-01-20 00:41:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d94f7fcbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 calico-apiserver-6d94f7fcbb-jr4rt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie829a990937 [] [] }} ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.608 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" 
WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.632 [INFO][5265] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" HandleID="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.632 [INFO][5265] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" HandleID="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"calico-apiserver-6d94f7fcbb-jr4rt", "timestamp":"2026-01-20 00:42:10.632072609 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.632 [INFO][5265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.652 [INFO][5265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.652 [INFO][5265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.729 [INFO][5265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.734 [INFO][5265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.746 [INFO][5265] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.748 [INFO][5265] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.751 [INFO][5265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.751 [INFO][5265] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.753 [INFO][5265] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732 Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.758 [INFO][5265] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 
00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.768 [INFO][5265] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.69/26] block=192.168.51.64/26 handle="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.768 [INFO][5265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.69/26] handle="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.768 [INFO][5265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:10.793437 containerd[2132]: 2026-01-20 00:42:10.768 [INFO][5265] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.69/26] IPv6=[] ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" HandleID="k8s-pod-network.a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.793795 containerd[2132]: 2026-01-20 00:42:10.771 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0", GenerateName:"calico-apiserver-6d94f7fcbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"77e9acbe-87a2-440f-b406-8c8900ab52f5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d94f7fcbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"calico-apiserver-6d94f7fcbb-jr4rt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie829a990937", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:10.793795 containerd[2132]: 2026-01-20 00:42:10.771 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.69/32] ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.793795 containerd[2132]: 2026-01-20 00:42:10.771 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie829a990937 
ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.793795 containerd[2132]: 2026-01-20 00:42:10.777 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.793795 containerd[2132]: 2026-01-20 00:42:10.778 [INFO][5239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0", GenerateName:"calico-apiserver-6d94f7fcbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"77e9acbe-87a2-440f-b406-8c8900ab52f5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d94f7fcbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732", Pod:"calico-apiserver-6d94f7fcbb-jr4rt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie829a990937", MAC:"c6:99:3d:4c:55:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:10.793795 containerd[2132]: 2026-01-20 00:42:10.790 [INFO][5239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" Namespace="calico-apiserver" Pod="calico-apiserver-6d94f7fcbb-jr4rt" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--6d94f7fcbb--jr4rt-eth0" Jan 20 00:42:10.807000 audit[5336]: NETFILTER_CFG table=filter:135 family=2 entries=55 op=nft_register_chain pid=5336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:10.807000 audit[5336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28304 a0=3 a1=ffffc53234c0 a2=0 a3=ffffa8f92fa8 items=0 ppid=4877 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 00:42:10.807000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:10.823271 containerd[2132]: time="2026-01-20T00:42:10.823235017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n9ngh,Uid:28862320-350f-4f29-92bb-d8201c93580b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7847338752f6354cd2de43ceb6e076e0cba522e1b0cdc68252bae437e14d898f\"" Jan 20 00:42:10.824453 containerd[2132]: time="2026-01-20T00:42:10.824361490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:42:10.845573 containerd[2132]: time="2026-01-20T00:42:10.845543118Z" level=info msg="connecting to shim a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732" address="unix:///run/containerd/s/1c3d11037a4d4348d6354cd1566a58fcdf8493769a8143a265cde541ea0e8a06" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:10.866438 systemd[1]: Started cri-containerd-a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732.scope - libcontainer container a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732. Jan 20 00:42:10.873000 audit: BPF prog-id=255 op=LOAD Jan 20 00:42:10.873000 audit: BPF prog-id=256 op=LOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.873000 audit: BPF prog-id=256 op=UNLOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.873000 audit: BPF prog-id=257 op=LOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.873000 audit: BPF prog-id=258 op=LOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.873000 audit: BPF prog-id=258 op=UNLOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.873000 audit: BPF prog-id=257 op=UNLOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.873000 audit: BPF prog-id=259 op=LOAD Jan 20 00:42:10.873000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5351 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:10.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130343837396432643431643363616462613065393531643836336235 Jan 20 00:42:10.896122 containerd[2132]: time="2026-01-20T00:42:10.896098153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d94f7fcbb-jr4rt,Uid:77e9acbe-87a2-440f-b406-8c8900ab52f5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a04879d2d41d3cadba0e951d863b58a619fd7c64f894baf953f447e6cb1d3732\"" Jan 20 00:42:11.080605 systemd-networkd[1725]: cali5516c1d59a0: Gained IPv6LL Jan 20 00:42:11.092103 containerd[2132]: time="2026-01-20T00:42:11.092030441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:11.096064 containerd[2132]: time="2026-01-20T00:42:11.095934464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:42:11.096064 containerd[2132]: time="2026-01-20T00:42:11.096006203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: 
active requests=0, bytes read=0" Jan 20 00:42:11.096199 kubelet[3599]: E0120 00:42:11.096155 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:11.096243 kubelet[3599]: E0120 00:42:11.096200 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:11.096795 kubelet[3599]: E0120 00:42:11.096474 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-n9ngh_calico-system(28862320-350f-4f29-92bb-d8201c93580b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:11.096910 containerd[2132]: time="2026-01-20T00:42:11.096622922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:11.098531 kubelet[3599]: E0120 00:42:11.098500 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:42:11.334089 containerd[2132]: time="2026-01-20T00:42:11.333685810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:11.336794 containerd[2132]: time="2026-01-20T00:42:11.336762931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:11.336862 containerd[2132]: time="2026-01-20T00:42:11.336834301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:11.336985 kubelet[3599]: E0120 00:42:11.336950 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:11.337457 kubelet[3599]: E0120 00:42:11.336995 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:11.337457 kubelet[3599]: E0120 00:42:11.337096 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpls5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d94f7fcbb-jr4rt_calico-apiserver(77e9acbe-87a2-440f-b406-8c8900ab52f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:11.338311 kubelet[3599]: E0120 00:42:11.338244 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:11.540798 containerd[2132]: time="2026-01-20T00:42:11.540730674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d88c779f-cqh6x,Uid:86fc1b8f-992e-433a-a4e5-96b8bd195d5d,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:11.541223 containerd[2132]: time="2026-01-20T00:42:11.540738914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6nwm,Uid:e914416f-b403-4119-a223-0b5c6e18edd3,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:11.658772 systemd-networkd[1725]: calidfd1c058a7e: Link UP Jan 20 00:42:11.659979 systemd-networkd[1725]: calidfd1c058a7e: Gained carrier Jan 20 00:42:11.672742 kubelet[3599]: E0120 00:42:11.672706 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:11.675687 kubelet[3599]: E0120 00:42:11.675566 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:42:11.676235 kubelet[3599]: E0120 00:42:11.676116 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.603 [INFO][5390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0 calico-kube-controllers-57d88c779f- calico-system 86fc1b8f-992e-433a-a4e5-96b8bd195d5d 871 0 2026-01-20 00:41:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57d88c779f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 calico-kube-controllers-57d88c779f-cqh6x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidfd1c058a7e [] [] }} ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.603 [INFO][5390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.627 [INFO][5414] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" HandleID="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.627 [INFO][5414] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" HandleID="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bb80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"calico-kube-controllers-57d88c779f-cqh6x", "timestamp":"2026-01-20 00:42:11.627448155 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.627 [INFO][5414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.627 [INFO][5414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.627 [INFO][5414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.632 [INFO][5414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.635 [INFO][5414] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.638 [INFO][5414] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.640 [INFO][5414] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.641 [INFO][5414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.641 [INFO][5414] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.642 [INFO][5414] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5 Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.646 [INFO][5414] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.653 [INFO][5414] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.70/26] block=192.168.51.64/26 handle="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.653 [INFO][5414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.70/26] handle="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 
00:42:11.653 [INFO][5414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:11.679778 containerd[2132]: 2026-01-20 00:42:11.654 [INFO][5414] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.70/26] IPv6=[] ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" HandleID="k8s-pod-network.6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.680974 containerd[2132]: 2026-01-20 00:42:11.655 [INFO][5390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0", GenerateName:"calico-kube-controllers-57d88c779f-", Namespace:"calico-system", SelfLink:"", UID:"86fc1b8f-992e-433a-a4e5-96b8bd195d5d", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d88c779f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"calico-kube-controllers-57d88c779f-cqh6x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfd1c058a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:11.680974 containerd[2132]: 2026-01-20 00:42:11.655 [INFO][5390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.70/32] ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.680974 containerd[2132]: 2026-01-20 00:42:11.655 [INFO][5390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfd1c058a7e ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.680974 containerd[2132]: 2026-01-20 00:42:11.660 [INFO][5390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" 
WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.680974 containerd[2132]: 2026-01-20 00:42:11.661 [INFO][5390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0", GenerateName:"calico-kube-controllers-57d88c779f-", Namespace:"calico-system", SelfLink:"", UID:"86fc1b8f-992e-433a-a4e5-96b8bd195d5d", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d88c779f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5", Pod:"calico-kube-controllers-57d88c779f-cqh6x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidfd1c058a7e", MAC:"6e:1b:5a:5d:7a:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:11.680974 containerd[2132]: 2026-01-20 00:42:11.676 [INFO][5390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" Namespace="calico-system" Pod="calico-kube-controllers-57d88c779f-cqh6x" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--kube--controllers--57d88c779f--cqh6x-eth0" Jan 20 00:42:11.706343 kernel: kauditd_printk_skb: 365 callbacks suppressed Jan 20 00:42:11.706420 kernel: audit: type=1325 audit(1768869731.700:725): table=filter:136 family=2 entries=48 op=nft_register_chain pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:11.700000 audit[5437]: NETFILTER_CFG table=filter:136 family=2 entries=48 op=nft_register_chain pid=5437 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:11.700000 audit[5437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffffb874d0 a2=0 a3=ffff9be48fa8 items=0 ppid=4877 pid=5437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.735266 kernel: audit: type=1300 audit(1768869731.700:725): arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffffb874d0 a2=0 a3=ffff9be48fa8 items=0 ppid=4877 pid=5437 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.735350 kernel: audit: type=1327 audit(1768869731.700:725): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:11.700000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:11.747000 audit[5439]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:11.757716 kernel: audit: type=1325 audit(1768869731.747:726): table=filter:137 family=2 entries=14 op=nft_register_rule pid=5439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:11.747000 audit[5439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8cde8f0 a2=0 a3=1 items=0 ppid=3769 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.778621 containerd[2132]: time="2026-01-20T00:42:11.778584410Z" level=info msg="connecting to shim 6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5" address="unix:///run/containerd/s/cfaa26be4631b17496912ab00b285b0ddd58b7c49e8fe91fba516334dc0a6685" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:11.780259 kernel: audit: type=1300 audit(1768869731.747:726): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8cde8f0 a2=0 a3=1 items=0 ppid=3769 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:11.789946 kernel: audit: type=1327 audit(1768869731.747:726): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:11.790719 systemd-networkd[1725]: cali65703926848: Gained IPv6LL Jan 20 00:42:11.762000 audit[5439]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:11.799972 kernel: audit: type=1325 audit(1768869731.762:727): table=nat:138 family=2 entries=20 op=nft_register_rule pid=5439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:11.762000 audit[5439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff8cde8f0 a2=0 a3=1 items=0 ppid=3769 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.818322 kernel: audit: type=1300 audit(1768869731.762:727): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff8cde8f0 a2=0 a3=1 items=0 ppid=3769 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:11.827698 kernel: audit: type=1327 audit(1768869731.762:727): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:11.851498 systemd[1]: Started cri-containerd-6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5.scope - libcontainer container 6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5. Jan 20 00:42:11.860455 systemd-networkd[1725]: calid28ca98819b: Link UP Jan 20 00:42:11.861659 systemd-networkd[1725]: calid28ca98819b: Gained carrier Jan 20 00:42:11.869000 audit: BPF prog-id=260 op=LOAD Jan 20 00:42:11.875000 audit: BPF prog-id=261 op=LOAD Jan 20 00:42:11.877540 kernel: audit: type=1334 audit(1768869731.869:728): prog-id=260 op=LOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.875000 audit: BPF prog-id=261 op=UNLOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.875000 audit: BPF prog-id=262 op=LOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.875000 audit: BPF prog-id=263 op=LOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.875000 audit: BPF prog-id=263 op=UNLOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.875000 audit: BPF prog-id=262 op=UNLOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.875000 audit: BPF prog-id=264 op=LOAD Jan 20 00:42:11.875000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5448 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634373032333630383261643866353436633036303565663034316464 Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.606 [INFO][5399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0 csi-node-driver- calico-system e914416f-b403-4119-a223-0b5c6e18edd3 753 0 2026-01-20 00:41:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 csi-node-driver-t6nwm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid28ca98819b [] [] }} ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.606 [INFO][5399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" 
Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.628 [INFO][5416] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" HandleID="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.628 [INFO][5416] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" HandleID="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"csi-node-driver-t6nwm", "timestamp":"2026-01-20 00:42:11.628117075 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.628 [INFO][5416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.653 [INFO][5416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.654 [INFO][5416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.746 [INFO][5416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.758 [INFO][5416] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.781 [INFO][5416] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.791 [INFO][5416] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.819 [INFO][5416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.821 [INFO][5416] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.828 [INFO][5416] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1 Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.835 [INFO][5416] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 
containerd[2132]: 2026-01-20 00:42:11.845 [INFO][5416] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.71/26] block=192.168.51.64/26 handle="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.846 [INFO][5416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.71/26] handle="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.846 [INFO][5416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:11.888270 containerd[2132]: 2026-01-20 00:42:11.846 [INFO][5416] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.71/26] IPv6=[] ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" HandleID="k8s-pod-network.25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.888653 containerd[2132]: 2026-01-20 00:42:11.855 [INFO][5399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e914416f-b403-4119-a223-0b5c6e18edd3", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"csi-node-driver-t6nwm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid28ca98819b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:11.888653 containerd[2132]: 2026-01-20 00:42:11.855 [INFO][5399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.71/32] ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.888653 containerd[2132]: 2026-01-20 00:42:11.855 [INFO][5399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid28ca98819b ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" 
Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.888653 containerd[2132]: 2026-01-20 00:42:11.863 [INFO][5399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.888653 containerd[2132]: 2026-01-20 00:42:11.867 [INFO][5399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e914416f-b403-4119-a223-0b5c6e18edd3", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1", Pod:"csi-node-driver-t6nwm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid28ca98819b", MAC:"ba:ea:92:c7:8a:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:11.888653 containerd[2132]: 2026-01-20 00:42:11.885 [INFO][5399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" Namespace="calico-system" Pod="csi-node-driver-t6nwm" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-csi--node--driver--t6nwm-eth0" Jan 20 00:42:11.925000 audit[5496]: NETFILTER_CFG table=filter:139 family=2 entries=58 op=nft_register_chain pid=5496 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:11.925000 audit[5496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27164 a0=3 a1=fffff29bc7d0 a2=0 a3=ffffaee56fa8 items=0 ppid=4877 pid=5496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.925000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:11.929775 
containerd[2132]: time="2026-01-20T00:42:11.929517787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d88c779f-cqh6x,Uid:86fc1b8f-992e-433a-a4e5-96b8bd195d5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6470236082ad8f546c0605ef041ddb3662819f1b4ec2738284742df1776e4fc5\"" Jan 20 00:42:11.932794 containerd[2132]: time="2026-01-20T00:42:11.932760599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:42:11.952254 containerd[2132]: time="2026-01-20T00:42:11.952189483Z" level=info msg="connecting to shim 25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1" address="unix:///run/containerd/s/e884f2cec4efbead2e1fdc7144521879c384770d3cd70934c0ba5f58a6866f4b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:11.969491 systemd[1]: Started cri-containerd-25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1.scope - libcontainer container 25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1. Jan 20 00:42:11.975000 audit: BPF prog-id=265 op=LOAD Jan 20 00:42:11.976000 audit: BPF prog-id=266 op=LOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.976000 audit: BPF prog-id=266 op=UNLOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.976000 audit: BPF prog-id=267 op=LOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.976000 audit: BPF prog-id=268 op=LOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.976000 audit: BPF prog-id=268 op=UNLOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.976000 audit: BPF prog-id=267 op=UNLOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.976000 audit: BPF prog-id=269 op=LOAD Jan 20 00:42:11.976000 audit[5517]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5506 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:11.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235653765616634393366356137643264353530616132373036353664 Jan 20 00:42:11.992089 containerd[2132]: time="2026-01-20T00:42:11.992067045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6nwm,Uid:e914416f-b403-4119-a223-0b5c6e18edd3,Namespace:calico-system,Attempt:0,} returns sandbox id \"25e7eaf493f5a7d2d550aa270656d039d8f268ccff52edc3521d5f8ff541dbd1\"" Jan 20 00:42:12.104426 systemd-networkd[1725]: calie829a990937: Gained IPv6LL Jan 20 00:42:12.219073 containerd[2132]: time="2026-01-20T00:42:12.218912122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:12.222761 containerd[2132]: time="2026-01-20T00:42:12.222672580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:42:12.222761 containerd[2132]: time="2026-01-20T00:42:12.222728640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:12.222952 kubelet[3599]: E0120 00:42:12.222895 3599 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:12.222952 kubelet[3599]: E0120 00:42:12.222947 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:12.223323 kubelet[3599]: E0120 00:42:12.223149 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm99j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d88c779f-cqh6x_calico-system(86fc1b8f-992e-433a-a4e5-96b8bd195d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
logger="UnhandledError" Jan 20 00:42:12.223551 containerd[2132]: time="2026-01-20T00:42:12.223202722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:42:12.224865 kubelet[3599]: E0120 00:42:12.224763 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:42:12.488250 containerd[2132]: time="2026-01-20T00:42:12.487748236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:12.497255 containerd[2132]: time="2026-01-20T00:42:12.497155335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:42:12.497255 containerd[2132]: time="2026-01-20T00:42:12.497201323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:12.497499 kubelet[3599]: E0120 00:42:12.497382 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:12.497499 kubelet[3599]: E0120 00:42:12.497429 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:12.498223 kubelet[3599]: E0120 00:42:12.497539 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:12.499769 containerd[2132]: time="2026-01-20T00:42:12.499708001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:42:12.552409 systemd-networkd[1725]: caliefcc097399e: Gained IPv6LL Jan 20 00:42:12.678074 kubelet[3599]: E0120 00:42:12.678038 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:42:12.684778 kubelet[3599]: E0120 00:42:12.684581 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:12.684778 kubelet[3599]: E0120 00:42:12.684700 3599 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:42:12.744750 systemd-networkd[1725]: calidfd1c058a7e: Gained IPv6LL Jan 20 00:42:12.779684 containerd[2132]: time="2026-01-20T00:42:12.779655793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:12.783314 containerd[2132]: time="2026-01-20T00:42:12.783238238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:42:12.783369 containerd[2132]: time="2026-01-20T00:42:12.783295754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:12.783561 kubelet[3599]: E0120 00:42:12.783524 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:12.783561 kubelet[3599]: E0120 00:42:12.783561 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:12.784336 kubelet[3599]: E0120 00:42:12.783640 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:12.785500 kubelet[3599]: E0120 00:42:12.785472 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:42:12.804000 audit[5541]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:12.804000 audit[5541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffc684b70 a2=0 a3=1 items=0 ppid=3769 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:12.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:12.816000 audit[5541]: NETFILTER_CFG 
table=nat:141 family=2 entries=20 op=nft_register_rule pid=5541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:12.816000 audit[5541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffc684b70 a2=0 a3=1 items=0 ppid=3769 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:12.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:13.320494 systemd-networkd[1725]: calid28ca98819b: Gained IPv6LL Jan 20 00:42:13.540800 containerd[2132]: time="2026-01-20T00:42:13.540759946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-xhrnz,Uid:0b136bd0-6a42-4726-87cd-a3538d5ee86b,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:42:13.627954 systemd-networkd[1725]: calid21073a9148: Link UP Jan 20 00:42:13.629471 systemd-networkd[1725]: calid21073a9148: Gained carrier Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.573 [INFO][5543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0 calico-apiserver-7c8fb8fd4d- calico-apiserver 0b136bd0-6a42-4726-87cd-a3538d5ee86b 874 0 2026-01-20 00:41:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8fb8fd4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 calico-apiserver-7c8fb8fd4d-xhrnz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid21073a9148 [] [] }} ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.573 [INFO][5543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.592 [INFO][5554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" HandleID="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.592 [INFO][5554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" HandleID="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"calico-apiserver-7c8fb8fd4d-xhrnz", 
"timestamp":"2026-01-20 00:42:13.592352279 +0000 UTC"}, Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.592 [INFO][5554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.592 [INFO][5554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.592 [INFO][5554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.597 [INFO][5554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.600 [INFO][5554] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.603 [INFO][5554] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.605 [INFO][5554] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.607 [INFO][5554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.607 [INFO][5554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.609 [INFO][5554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8 Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.613 [INFO][5554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.622 [INFO][5554] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.72/26] block=192.168.51.64/26 handle="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.622 [INFO][5554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.72/26] handle="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.622 [INFO][5554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:13.643067 containerd[2132]: 2026-01-20 00:42:13.622 [INFO][5554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.72/26] IPv6=[] ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" HandleID="k8s-pod-network.dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.644577 containerd[2132]: 2026-01-20 00:42:13.624 [INFO][5543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0", GenerateName:"calico-apiserver-7c8fb8fd4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b136bd0-6a42-4726-87cd-a3538d5ee86b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8fb8fd4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"calico-apiserver-7c8fb8fd4d-xhrnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid21073a9148", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:13.644577 containerd[2132]: 2026-01-20 00:42:13.624 [INFO][5543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.72/32] ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.644577 containerd[2132]: 2026-01-20 00:42:13.624 [INFO][5543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid21073a9148 ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.644577 containerd[2132]: 2026-01-20 00:42:13.630 [INFO][5543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.644577 containerd[2132]: 2026-01-20 00:42:13.630 
[INFO][5543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0", GenerateName:"calico-apiserver-7c8fb8fd4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b136bd0-6a42-4726-87cd-a3538d5ee86b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8fb8fd4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8", Pod:"calico-apiserver-7c8fb8fd4d-xhrnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid21073a9148", MAC:"b6:64:14:ce:0e:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:13.644577 containerd[2132]: 2026-01-20 00:42:13.640 [INFO][5543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" Namespace="calico-apiserver" Pod="calico-apiserver-7c8fb8fd4d-xhrnz" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-calico--apiserver--7c8fb8fd4d--xhrnz-eth0" Jan 20 00:42:13.653000 audit[5572]: NETFILTER_CFG table=filter:142 family=2 entries=41 op=nft_register_chain pid=5572 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:13.653000 audit[5572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23096 a0=3 a1=ffffda3be0f0 a2=0 a3=ffffa55a6fa8 items=0 ppid=4877 pid=5572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.653000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:13.684722 kubelet[3599]: E0120 00:42:13.684625 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:42:13.684722 kubelet[3599]: E0120 00:42:13.684696 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:42:13.685086 containerd[2132]: time="2026-01-20T00:42:13.684011244Z" level=info msg="connecting to shim dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8" address="unix:///run/containerd/s/62e31218629a11a5e4fe3e1857de49133f8f7398b01884b6b70a608626bb1fd9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:13.708755 systemd[1]: Started cri-containerd-dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8.scope - libcontainer container dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8. Jan 20 00:42:13.724000 audit: BPF prog-id=270 op=LOAD Jan 20 00:42:13.724000 audit: BPF prog-id=271 op=LOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.724000 audit: BPF prog-id=271 op=UNLOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.724000 audit: BPF prog-id=272 op=LOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.724000 audit: BPF prog-id=273 op=LOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.724000 audit: BPF prog-id=273 op=UNLOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.724000 audit: BPF prog-id=272 op=UNLOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.724000 audit: BPF prog-id=274 op=LOAD Jan 20 00:42:13.724000 audit[5593]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5580 pid=5593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:13.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623737616631313137326361376664393062316639393865393332 Jan 20 00:42:13.750285 containerd[2132]: time="2026-01-20T00:42:13.750256851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8fb8fd4d-xhrnz,Uid:0b136bd0-6a42-4726-87cd-a3538d5ee86b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dcb77af11172ca7fd90b1f998e932a4850b38369293bc0eb1e781b5e59c7dca8\"" Jan 20 00:42:13.751578 containerd[2132]: time="2026-01-20T00:42:13.751548001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:14.005861 containerd[2132]: 
time="2026-01-20T00:42:14.005743642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:14.009318 containerd[2132]: time="2026-01-20T00:42:14.009264849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:14.009583 containerd[2132]: time="2026-01-20T00:42:14.009447639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:14.009648 kubelet[3599]: E0120 00:42:14.009605 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:14.009704 kubelet[3599]: E0120 00:42:14.009660 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:14.009831 kubelet[3599]: E0120 00:42:14.009777 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6cps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7c8fb8fd4d-xhrnz_calico-apiserver(0b136bd0-6a42-4726-87cd-a3538d5ee86b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:14.011012 kubelet[3599]: E0120 00:42:14.010972 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:42:14.540831 containerd[2132]: time="2026-01-20T00:42:14.540788893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smdnr,Uid:e57a0c68-c8b0-453f-a1e5-cacbecaee897,Namespace:kube-system,Attempt:0,}" Jan 20 00:42:14.635002 systemd-networkd[1725]: cali17eb80f50a7: Link UP Jan 20 00:42:14.635648 systemd-networkd[1725]: cali17eb80f50a7: Gained carrier Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.580 [INFO][5619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0 coredns-674b8bbfcf- kube-system e57a0c68-c8b0-453f-a1e5-cacbecaee897 870 0 2026-01-20 00:41:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-fc9e3ff023 coredns-674b8bbfcf-smdnr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17eb80f50a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.581 [INFO][5619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.598 [INFO][5632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" HandleID="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.598 [INFO][5632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" HandleID="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-fc9e3ff023", "pod":"coredns-674b8bbfcf-smdnr", "timestamp":"2026-01-20 00:42:14.598288816 +0000 UTC"}, 
Hostname:"ci-4515.1.0-n-fc9e3ff023", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.598 [INFO][5632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.598 [INFO][5632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.598 [INFO][5632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-fc9e3ff023' Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.605 [INFO][5632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.611 [INFO][5632] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.614 [INFO][5632] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.615 [INFO][5632] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.617 [INFO][5632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.617 [INFO][5632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.619 [INFO][5632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.623 [INFO][5632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.631 [INFO][5632] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.73/26] block=192.168.51.64/26 handle="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.631 [INFO][5632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.73/26] handle="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" host="ci-4515.1.0-n-fc9e3ff023" Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.631 [INFO][5632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:14.654868 containerd[2132]: 2026-01-20 00:42:14.631 [INFO][5632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.73/26] IPv6=[] ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" HandleID="k8s-pod-network.fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Workload="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.655222 containerd[2132]: 2026-01-20 00:42:14.633 [INFO][5619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e57a0c68-c8b0-453f-a1e5-cacbecaee897", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"", Pod:"coredns-674b8bbfcf-smdnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17eb80f50a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:14.655222 containerd[2132]: 2026-01-20 00:42:14.633 [INFO][5619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.73/32] ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.655222 containerd[2132]: 2026-01-20 00:42:14.633 [INFO][5619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17eb80f50a7 ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.655222 containerd[2132]: 2026-01-20 00:42:14.635 [INFO][5619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.655222 containerd[2132]: 2026-01-20 00:42:14.636 [INFO][5619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e57a0c68-c8b0-453f-a1e5-cacbecaee897", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-fc9e3ff023", ContainerID:"fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad", Pod:"coredns-674b8bbfcf-smdnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17eb80f50a7", MAC:"22:7a:fa:d1:9c:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:14.655222 containerd[2132]: 2026-01-20 00:42:14.645 [INFO][5619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" Namespace="kube-system" Pod="coredns-674b8bbfcf-smdnr" WorkloadEndpoint="ci--4515.1.0--n--fc9e3ff023-k8s-coredns--674b8bbfcf--smdnr-eth0" Jan 20 00:42:14.668000 audit[5647]: NETFILTER_CFG table=filter:143 family=2 entries=48 op=nft_register_chain pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 00:42:14.668000 audit[5647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=ffffc871d3e0 a2=0 a3=ffff9328cfa8 items=0 ppid=4877 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.668000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 00:42:14.683924 kubelet[3599]: E0120 00:42:14.683744 3599 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:42:14.710000 audit[5649]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:14.710000 audit[5649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff1b14e70 a2=0 a3=1 items=0 ppid=3769 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:14.714000 audit[5649]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:14.714000 audit[5649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff1b14e70 a2=0 a3=1 items=0 ppid=3769 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:14.833845 containerd[2132]: time="2026-01-20T00:42:14.833760369Z" level=info msg="connecting to shim fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad" address="unix:///run/containerd/s/433831a72b69acb180faded13763fce5370eac084d6fec60d8f4edbe3ac9159a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 00:42:14.854450 systemd[1]: Started cri-containerd-fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad.scope - libcontainer container fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad. 
Jan 20 00:42:14.861000 audit: BPF prog-id=275 op=LOAD Jan 20 00:42:14.862000 audit: BPF prog-id=276 op=LOAD Jan 20 00:42:14.862000 audit[5670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.862000 audit: BPF prog-id=276 op=UNLOAD Jan 20 00:42:14.862000 audit[5670]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.862000 audit: BPF prog-id=277 op=LOAD Jan 20 00:42:14.862000 audit[5670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.862000 audit: BPF prog-id=278 op=LOAD Jan 20 00:42:14.862000 audit[5670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.862000 audit: BPF prog-id=278 op=UNLOAD Jan 20 00:42:14.862000 audit[5670]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.862000 audit: BPF prog-id=277 op=UNLOAD Jan 20 00:42:14.862000 audit[5670]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.862000 audit: BPF prog-id=279 op=LOAD Jan 20 00:42:14.862000 audit[5670]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663373161633539396439626431653939396536313665663135396362 Jan 20 00:42:14.882537 containerd[2132]: time="2026-01-20T00:42:14.882490022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smdnr,Uid:e57a0c68-c8b0-453f-a1e5-cacbecaee897,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad\"" Jan 20 00:42:14.891271 containerd[2132]: time="2026-01-20T00:42:14.891245370Z" level=info msg="CreateContainer within sandbox \"fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:42:14.928001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596425526.mount: Deactivated successfully. Jan 20 00:42:14.929341 containerd[2132]: time="2026-01-20T00:42:14.929150577Z" level=info msg="Container eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61: CDI devices from CRI Config.CDIDevices: []" Jan 20 00:42:14.931788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498391986.mount: Deactivated successfully. Jan 20 00:42:14.947799 containerd[2132]: time="2026-01-20T00:42:14.947768519Z" level=info msg="CreateContainer within sandbox \"fc71ac599d9bd1e999e616ef159cb08911fd54c9ee9f9be915294362adc8b7ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61\"" Jan 20 00:42:14.948455 containerd[2132]: time="2026-01-20T00:42:14.948428371Z" level=info msg="StartContainer for \"eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61\"" Jan 20 00:42:14.949402 containerd[2132]: time="2026-01-20T00:42:14.949377114Z" level=info msg="connecting to shim eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61" address="unix:///run/containerd/s/433831a72b69acb180faded13763fce5370eac084d6fec60d8f4edbe3ac9159a" protocol=ttrpc version=3 Jan 20 00:42:14.964437 systemd[1]: Started cri-containerd-eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61.scope - libcontainer container eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61. 
Jan 20 00:42:14.971000 audit: BPF prog-id=280 op=LOAD Jan 20 00:42:14.972000 audit: BPF prog-id=281 op=LOAD Jan 20 00:42:14.972000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.972000 audit: BPF prog-id=281 op=UNLOAD Jan 20 00:42:14.972000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.972000 audit: BPF prog-id=282 op=LOAD Jan 20 00:42:14.972000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.972000 audit: BPF prog-id=283 op=LOAD Jan 20 00:42:14.972000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.972000 audit: BPF prog-id=283 op=UNLOAD Jan 20 00:42:14.972000 audit[5696]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.972000 audit: BPF prog-id=282 op=UNLOAD Jan 20 00:42:14.972000 audit[5696]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.972000 audit: BPF prog-id=284 op=LOAD Jan 20 00:42:14.972000 audit[5696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5659 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:14.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562353836643231313461323434613030653534643864343438653633 Jan 20 00:42:14.991345 containerd[2132]: time="2026-01-20T00:42:14.991317489Z" level=info msg="StartContainer for \"eb586d2114a244a00e54d8d448e634bad4c68b87a0f2e02bdd0465c315906e61\" returns successfully" Jan 20 00:42:15.368461 systemd-networkd[1725]: calid21073a9148: Gained IPv6LL Jan 20 00:42:15.687486 kubelet[3599]: E0120 00:42:15.687339 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:42:15.709933 kubelet[3599]: I0120 00:42:15.709887 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-smdnr" podStartSLOduration=44.709874299 podStartE2EDuration="44.709874299s" podCreationTimestamp="2026-01-20 00:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:42:15.709168854 +0000 UTC m=+51.256133935" watchObservedRunningTime="2026-01-20 00:42:15.709874299 +0000 UTC m=+51.256839364" Jan 20 00:42:15.726000 audit[5729]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:42:15.726000 audit[5729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffc9b3170 a2=0 a3=1 items=0 ppid=3769 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:15.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:15.768000 audit[5729]: NETFILTER_CFG table=nat:147 family=2 entries=56 op=nft_register_chain pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Jan 20 00:42:15.768000 audit[5729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffffc9b3170 a2=0 a3=1 items=0 ppid=3769 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:42:15.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:42:16.009726 systemd-networkd[1725]: cali17eb80f50a7: Gained IPv6LL Jan 20 00:42:19.541895 containerd[2132]: time="2026-01-20T00:42:19.541854905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:42:19.770452 containerd[2132]: time="2026-01-20T00:42:19.770285882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:19.773406 containerd[2132]: time="2026-01-20T00:42:19.773312580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:42:19.773406 containerd[2132]: time="2026-01-20T00:42:19.773329453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:19.773552 kubelet[3599]: E0120 00:42:19.773481 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:19.773552 kubelet[3599]: E0120 00:42:19.773517 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:19.773967 kubelet[3599]: E0120 00:42:19.773607 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4ed88e40380e41739ab0886868a4c216,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:19.775875 containerd[2132]: time="2026-01-20T00:42:19.775849481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:42:20.049809 containerd[2132]: time="2026-01-20T00:42:20.049642488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:20.053141 containerd[2132]: time="2026-01-20T00:42:20.053051578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:42:20.053141 containerd[2132]: time="2026-01-20T00:42:20.053110957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:20.053340 kubelet[3599]: E0120 00:42:20.053277 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:20.053638 kubelet[3599]: E0120 00:42:20.053349 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:20.053638 kubelet[3599]: E0120 00:42:20.053581 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:20.055147 kubelet[3599]: E0120 00:42:20.054726 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:42:24.541698 containerd[2132]: time="2026-01-20T00:42:24.541418756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:42:24.785852 containerd[2132]: time="2026-01-20T00:42:24.785806876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:24.788790 containerd[2132]: time="2026-01-20T00:42:24.788753699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:42:24.788866 containerd[2132]: time="2026-01-20T00:42:24.788819054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:24.789050 kubelet[3599]: E0120 00:42:24.789014 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:24.789901 kubelet[3599]: E0120 00:42:24.789376 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:24.789901 kubelet[3599]: E0120 00:42:24.789589 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm99j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d88c779f-cqh6x_calico-system(86fc1b8f-992e-433a-a4e5-96b8bd195d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:24.790379 containerd[2132]: time="2026-01-20T00:42:24.790345975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:24.791458 kubelet[3599]: E0120 00:42:24.791352 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:42:25.057154 containerd[2132]: time="2026-01-20T00:42:25.057102128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:25.063392 containerd[2132]: time="2026-01-20T00:42:25.063359093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:25.064063 containerd[2132]: time="2026-01-20T00:42:25.063427471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:25.064118 kubelet[3599]: E0120 00:42:25.063562 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:25.064118 kubelet[3599]: E0120 00:42:25.063606 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:25.064118 kubelet[3599]: E0120 00:42:25.063714 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpls5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d94f7fcbb-jr4rt_calico-apiserver(77e9acbe-87a2-440f-b406-8c8900ab52f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:25.065169 kubelet[3599]: E0120 00:42:25.065142 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:26.542785 containerd[2132]: time="2026-01-20T00:42:26.542708911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:26.789152 containerd[2132]: time="2026-01-20T00:42:26.789038595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:26.792374 containerd[2132]: time="2026-01-20T00:42:26.792298911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:26.792443 containerd[2132]: time="2026-01-20T00:42:26.792354002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:26.792627 
kubelet[3599]: E0120 00:42:26.792598 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:26.793135 kubelet[3599]: E0120 00:42:26.792907 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:26.793135 kubelet[3599]: E0120 00:42:26.793036 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8fb8fd4d-9hn8w_calico-apiserver(6eedb683-7841-469d-9465-68ae5bed2952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:26.794185 kubelet[3599]: E0120 00:42:26.794139 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:42:27.541044 containerd[2132]: time="2026-01-20T00:42:27.540886487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:42:27.815484 containerd[2132]: time="2026-01-20T00:42:27.815370672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:27.818576 containerd[2132]: time="2026-01-20T00:42:27.818545864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:42:27.818576 containerd[2132]: time="2026-01-20T00:42:27.818600963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:27.818737 kubelet[3599]: E0120 00:42:27.818698 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:27.818922 kubelet[3599]: E0120 00:42:27.818749 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:27.818922 kubelet[3599]: E0120 00:42:27.818861 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n9ngh_calico-system(28862320-350f-4f29-92bb-d8201c93580b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:27.820439 kubelet[3599]: E0120 00:42:27.820288 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:42:28.542611 containerd[2132]: time="2026-01-20T00:42:28.542553243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:42:28.782379 containerd[2132]: time="2026-01-20T00:42:28.782340103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:28.785654 containerd[2132]: time="2026-01-20T00:42:28.785619155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:42:28.785814 containerd[2132]: time="2026-01-20T00:42:28.785732041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:28.785983 kubelet[3599]: E0120 00:42:28.785953 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:28.786086 kubelet[3599]: E0120 00:42:28.786072 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:28.786291 kubelet[3599]: E0120 00:42:28.786238 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:28.788706 containerd[2132]: time="2026-01-20T00:42:28.788174239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:42:29.041870 containerd[2132]: time="2026-01-20T00:42:29.041836639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:29.044943 containerd[2132]: time="2026-01-20T00:42:29.044906650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:42:29.045006 containerd[2132]: time="2026-01-20T00:42:29.044979302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:29.045159 kubelet[3599]: E0120 00:42:29.045127 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:29.045572 kubelet[3599]: E0120 00:42:29.045408 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:29.045572 kubelet[3599]: E0120 00:42:29.045531 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:29.047455 kubelet[3599]: E0120 00:42:29.047420 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:42:29.541674 containerd[2132]: time="2026-01-20T00:42:29.541612120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:29.781403 containerd[2132]: 
time="2026-01-20T00:42:29.781226643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:29.784411 containerd[2132]: time="2026-01-20T00:42:29.784382138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:29.784554 containerd[2132]: time="2026-01-20T00:42:29.784431508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:29.784693 kubelet[3599]: E0120 00:42:29.784655 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:29.784693 kubelet[3599]: E0120 00:42:29.784696 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:29.785150 kubelet[3599]: E0120 00:42:29.784802 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6cps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7c8fb8fd4d-xhrnz_calico-apiserver(0b136bd0-6a42-4726-87cd-a3538d5ee86b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:29.786372 kubelet[3599]: E0120 00:42:29.786347 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:42:31.541775 kubelet[3599]: E0120 00:42:31.541726 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:42:39.541856 kubelet[3599]: E0120 00:42:39.541785 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:42:39.542237 kubelet[3599]: E0120 00:42:39.542144 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:40.541322 kubelet[3599]: E0120 00:42:40.541232 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:42:41.548205 
kubelet[3599]: E0120 00:42:41.547423 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:42:41.549433 kubelet[3599]: E0120 00:42:41.549391 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:42:42.542985 containerd[2132]: time="2026-01-20T00:42:42.542888395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:42:42.943604 containerd[2132]: time="2026-01-20T00:42:42.943162180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:42.946711 containerd[2132]: time="2026-01-20T00:42:42.946629871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:42:42.946776 containerd[2132]: time="2026-01-20T00:42:42.946684777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:42.946895 kubelet[3599]: E0120 00:42:42.946839 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:42.947149 kubelet[3599]: E0120 00:42:42.946900 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:42.948986 kubelet[3599]: E0120 00:42:42.948944 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4ed88e40380e41739ab0886868a4c216,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:42.951049 containerd[2132]: time="2026-01-20T00:42:42.951019537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:42:43.234018 containerd[2132]: time="2026-01-20T00:42:43.233856593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:43.237502 containerd[2132]: time="2026-01-20T00:42:43.237411735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:42:43.237502 containerd[2132]: time="2026-01-20T00:42:43.237465145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:43.237663 kubelet[3599]: E0120 00:42:43.237619 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:43.237824 kubelet[3599]: E0120 00:42:43.237665 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:43.237824 kubelet[3599]: E0120 00:42:43.237768 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:43.239061 kubelet[3599]: E0120 00:42:43.239021 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:42:44.542391 kubelet[3599]: E0120 00:42:44.542338 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:42:52.542953 containerd[2132]: 
time="2026-01-20T00:42:52.542813204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:42:52.823617 containerd[2132]: time="2026-01-20T00:42:52.823343886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:52.827070 containerd[2132]: time="2026-01-20T00:42:52.826990601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:42:52.827070 containerd[2132]: time="2026-01-20T00:42:52.827046771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:52.827444 kubelet[3599]: E0120 00:42:52.827413 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:52.828750 kubelet[3599]: E0120 00:42:52.828353 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:52.828896 kubelet[3599]: E0120 00:42:52.828861 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n9ngh_calico-system(28862320-350f-4f29-92bb-d8201c93580b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:52.830333 kubelet[3599]: E0120 00:42:52.830076 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:42:53.542286 containerd[2132]: time="2026-01-20T00:42:53.542246299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:42:53.789036 containerd[2132]: time="2026-01-20T00:42:53.788981649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:53.792190 containerd[2132]: time="2026-01-20T00:42:53.792158951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:42:53.792248 containerd[2132]: time="2026-01-20T00:42:53.792214178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:53.792526 kubelet[3599]: E0120 00:42:53.792376 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:53.792526 kubelet[3599]: E0120 00:42:53.792424 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:53.793196 kubelet[3599]: E0120 00:42:53.792623 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm99j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d88c779f-cqh6x_calico-system(86fc1b8f-992e-433a-a4e5-96b8bd195d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:53.793657 containerd[2132]: time="2026-01-20T00:42:53.792776945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:53.794447 kubelet[3599]: E0120 00:42:53.794373 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:42:54.064916 containerd[2132]: 
time="2026-01-20T00:42:54.064676231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:54.068163 containerd[2132]: time="2026-01-20T00:42:54.068079055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:54.068163 containerd[2132]: time="2026-01-20T00:42:54.068137354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:54.068334 kubelet[3599]: E0120 00:42:54.068285 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:54.068636 kubelet[3599]: E0120 00:42:54.068345 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:54.068636 kubelet[3599]: E0120 00:42:54.068456 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7c8fb8fd4d-9hn8w_calico-apiserver(6eedb683-7841-469d-9465-68ae5bed2952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:54.070463 kubelet[3599]: E0120 00:42:54.070432 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:42:54.544664 containerd[2132]: time="2026-01-20T00:42:54.544617855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:54.871260 containerd[2132]: time="2026-01-20T00:42:54.871140465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:54.874320 containerd[2132]: time="2026-01-20T00:42:54.874261469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:54.874404 containerd[2132]: time="2026-01-20T00:42:54.874356817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:54.874672 kubelet[3599]: E0120 00:42:54.874638 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:54.874727 kubelet[3599]: E0120 00:42:54.874678 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:54.874889 kubelet[3599]: E0120 00:42:54.874851 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpls5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d94f7fcbb-jr4rt_calico-apiserver(77e9acbe-87a2-440f-b406-8c8900ab52f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:54.876117 containerd[2132]: time="2026-01-20T00:42:54.875268927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:42:54.876883 kubelet[3599]: E0120 00:42:54.876841 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:42:55.110734 containerd[2132]: time="2026-01-20T00:42:55.110691759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:55.113762 containerd[2132]: time="2026-01-20T00:42:55.113728359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:42:55.113848 containerd[2132]: time="2026-01-20T00:42:55.113801538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:55.114058 kubelet[3599]: E0120 00:42:55.114018 3599 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:55.115162 kubelet[3599]: E0120 00:42:55.114371 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:55.115357 kubelet[3599]: E0120 00:42:55.115328 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:55.117555 containerd[2132]: time="2026-01-20T00:42:55.117526960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:42:55.370127 containerd[2132]: time="2026-01-20T00:42:55.369947718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:55.373052 containerd[2132]: time="2026-01-20T00:42:55.372962533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:42:55.373052 containerd[2132]: time="2026-01-20T00:42:55.373019096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:55.373791 kubelet[3599]: E0120 00:42:55.373751 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:55.373862 kubelet[3599]: E0120 00:42:55.373799 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:55.374088 kubelet[3599]: E0120 00:42:55.373941 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:55.375260 kubelet[3599]: E0120 00:42:55.375207 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:42:57.544471 containerd[2132]: time="2026-01-20T00:42:57.542678687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:57.544818 kubelet[3599]: E0120 00:42:57.544413 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:42:57.807277 containerd[2132]: time="2026-01-20T00:42:57.807157283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:42:57.810701 containerd[2132]: time="2026-01-20T00:42:57.810649454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:57.811126 containerd[2132]: time="2026-01-20T00:42:57.810730066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:42:57.811185 kubelet[3599]: E0120 00:42:57.810908 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:57.811185 kubelet[3599]: E0120 00:42:57.810951 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:57.811185 kubelet[3599]: E0120 00:42:57.811063 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6cps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8fb8fd4d-xhrnz_calico-apiserver(0b136bd0-6a42-4726-87cd-a3538d5ee86b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:57.812453 kubelet[3599]: E0120 00:42:57.812396 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:43:05.542431 kubelet[3599]: E0120 00:43:05.542107 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:43:05.542826 kubelet[3599]: E0120 00:43:05.542519 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:43:07.541811 kubelet[3599]: E0120 00:43:07.541712 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:43:07.542588 kubelet[3599]: E0120 00:43:07.542323 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:43:08.541433 kubelet[3599]: E0120 00:43:08.540890 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:43:09.374386 kernel: kauditd_printk_skb: 136 callbacks suppressed Jan 20 00:43:09.374499 kernel: audit: type=1130 audit(1768869789.370:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.14:22-10.200.16.10:57930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:09.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.14:22-10.200.16.10:57930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:09.370286 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:57930.service - OpenSSH per-connection server daemon (10.200.16.10:57930). 
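[Editor's note] Every pull in this stretch of the log fails the same way: containerd asks ghcr.io for the v3.30.4 tag of a flatcar/calico image, the registry answers 404 Not Found, containerd reports NotFound over CRI, and kubelet turns that into ErrImagePull and later ImagePullBackOff. One way to confirm from anywhere whether the tag really is absent is to query the OCI distribution API directly. The sketch below is illustrative only: it assumes the usual anonymous pull-token flow ghcr.io offers for public repositories; the repository path and tag are copied from the log, while the function names and headers are my own scaffolding.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists checks whether a tag resolves on ghcr.io via the OCI distribution API,
// assuming the standard anonymous pull-token flow for public GHCR repositories.
func tagExists(repo, tag string) (bool, error) {
	// 1. Fetch an anonymous pull token for the repository.
	tokenURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokenURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// 2. HEAD the manifest: 200 means the tag resolves; 404 matches the log above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	// Repository and tag copied from the failing pulls above.
	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
	fmt.Println(ok, err)
}

If the tag genuinely does not exist, the usual remedies are pinning the manifests to a tag that does, or mirroring the required images into a registry the node can reach; the log itself does not say which was intended here.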
Jan 20 00:43:09.806683 sshd[5823]: Accepted publickey for core from 10.200.16.10 port 57930 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:09.806000 audit[5823]: USER_ACCT pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.828891 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:09.825000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.834629 systemd-logind[2110]: New session 10 of user core. Jan 20 00:43:09.847844 kernel: audit: type=1101 audit(1768869789.806:778): pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.847912 kernel: audit: type=1103 audit(1768869789.825:779): pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.857330 kernel: audit: type=1006 audit(1768869789.825:780): pid=5823 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 00:43:09.858480 systemd[1]: Started session-10.scope - Session 10 of User core. 
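[Editor's note] sshd identifies the accepted key only by its fingerprint ("RSA SHA256:cmmm…"): the OpenSSH SHA-256 format, i.e. unpadded base64 of a SHA-256 digest over the wire-format public key blob. A minimal sketch of how that string is produced follows; it generates a throwaway ed25519 key purely to demonstrate the format (the key type does not change the format, and the real key is, rightly, not in the log).

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key, used only to show the fingerprint format.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:" + unpadded base64(SHA-256(wire-format key blob)),
	// the same format sshd logs in the "Accepted publickey" entries above.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}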
Jan 20 00:43:09.825000 audit[5823]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4135be0 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:09.878255 kernel: audit: type=1300 audit(1768869789.825:780): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4135be0 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:09.825000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:09.885279 kernel: audit: type=1327 audit(1768869789.825:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:09.879000 audit[5823]: USER_START pid=5823 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.903314 kernel: audit: type=1105 audit(1768869789.879:781): pid=5823 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.904000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:09.919071 kernel: audit: type=1103 audit(1768869789.904:782): pid=5826 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:10.116115 sshd[5826]: Connection closed by 10.200.16.10 port 57930 Jan 20 00:43:10.116825 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:10.117000 audit[5823]: USER_END pid=5823 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:10.120267 systemd-logind[2110]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:43:10.124105 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:57930.service: Deactivated successfully. Jan 20 00:43:10.127382 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:43:10.130841 systemd-logind[2110]: Removed session 10. 
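[Editor's note] The audit PROCTITLE records hex-encode the process title because it may contain spaces or non-printable bytes. The value that repeats throughout this log decodes to a short ASCII string, which a two-line decode confirms:

package main

import (
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// proctitle value copied from the audit records above.
	const proctitle = "737368642D73657373696F6E3A20636F7265205B707269765D"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%q\n", raw) // "sshd-session: core [priv]"
}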
Jan 20 00:43:10.118000 audit[5823]: CRED_DISP pid=5823 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:10.150308 kernel: audit: type=1106 audit(1768869790.117:783): pid=5823 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:10.150373 kernel: audit: type=1104 audit(1768869790.118:784): pid=5823 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:10.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.14:22-10.200.16.10:57930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:10.543667 kubelet[3599]: E0120 00:43:10.543586 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:43:10.544349 kubelet[3599]: E0120 00:43:10.544127 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:43:15.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.14:22-10.200.16.10:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:15.200489 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:49516.service - OpenSSH per-connection server daemon (10.200.16.10:49516). Jan 20 00:43:15.204449 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:43:15.204506 kernel: audit: type=1130 audit(1768869795.200:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.14:22-10.200.16.10:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:15.605000 audit[5839]: USER_ACCT pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.622378 sshd[5839]: Accepted publickey for core from 10.200.16.10 port 49516 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:15.623694 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:15.622000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.639478 kernel: audit: type=1101 audit(1768869795.605:787): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.639772 kernel: audit: type=1103 audit(1768869795.622:788): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.645143 systemd-logind[2110]: New session 11 of user core. Jan 20 00:43:15.648858 kernel: audit: type=1006 audit(1768869795.622:789): pid=5839 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 00:43:15.622000 audit[5839]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed249a40 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:15.665921 kernel: audit: type=1300 audit(1768869795.622:789): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed249a40 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:15.674461 kernel: audit: type=1327 audit(1768869795.622:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:15.622000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:15.674556 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 00:43:15.677000 audit[5839]: USER_START pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.695000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.710666 kernel: audit: type=1105 audit(1768869795.677:790): pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.710723 kernel: audit: type=1103 audit(1768869795.695:791): pid=5842 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.911234 sshd[5842]: Connection closed by 10.200.16.10 port 49516 Jan 20 00:43:15.911704 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:15.911000 audit[5839]: USER_END pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.931615 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:49516.service: Deactivated successfully. Jan 20 00:43:15.912000 audit[5839]: CRED_DISP pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.937025 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:43:15.940854 systemd-logind[2110]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:43:15.945197 systemd-logind[2110]: Removed session 11. Jan 20 00:43:15.946103 kernel: audit: type=1106 audit(1768869795.911:792): pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.946166 kernel: audit: type=1104 audit(1768869795.912:793): pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:15.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.14:22-10.200.16.10:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:17.541574 kubelet[3599]: E0120 00:43:17.541531 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:43:17.541574 kubelet[3599]: E0120 00:43:17.541862 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:43:18.542404 kubelet[3599]: E0120 00:43:18.541988 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:43:19.541694 kubelet[3599]: E0120 00:43:19.541653 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:43:20.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.14:22-10.200.16.10:43244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:21.000845 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:43244.service - OpenSSH per-connection server daemon (10.200.16.10:43244). Jan 20 00:43:21.004095 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:43:21.004148 kernel: audit: type=1130 audit(1768869800.999:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.14:22-10.200.16.10:43244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:21.439094 sshd[5854]: Accepted publickey for core from 10.200.16.10 port 43244 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:21.437000 audit[5854]: USER_ACCT pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.457440 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:21.455000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.465139 systemd-logind[2110]: New session 12 of user core. Jan 20 00:43:21.479576 kernel: audit: type=1101 audit(1768869801.437:796): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.479642 kernel: audit: type=1103 audit(1768869801.455:797): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.488491 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:43:21.494725 kernel: audit: type=1006 audit(1768869801.455:798): pid=5854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 00:43:21.499382 kernel: audit: type=1300 audit(1768869801.455:798): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc25bf690 a2=3 a3=0 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:21.455000 audit[5854]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc25bf690 a2=3 a3=0 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:21.455000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:21.528966 kernel: audit: type=1327 audit(1768869801.455:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:21.495000 audit[5854]: USER_START pid=5854 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.549809 kernel: audit: type=1105 audit(1768869801.495:799): pid=5854 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.497000 audit[5857]: CRED_ACQ pid=5857 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.565042 kernel: audit: type=1103 audit(1768869801.497:800): pid=5857 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.566723 kubelet[3599]: E0120 00:43:21.566671 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:43:21.740185 sshd[5857]: Connection closed by 10.200.16.10 port 43244 Jan 20 00:43:21.741514 sshd-session[5854]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:21.742000 audit[5854]: USER_END pid=5854 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.746377 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:43244.service: Deactivated successfully. Jan 20 00:43:21.746837 systemd-logind[2110]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:43:21.749616 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:43:21.752753 systemd-logind[2110]: Removed session 12. 
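[Editor's note] Between failed pulls the kubelet does not retry immediately; the recurring "Back-off pulling image" entries come from its per-image backoff, which, going by the kubelet's usual defaults (an assumption, since the values are not printed here), starts around 10 seconds and doubles up to a 5-minute cap. That is why the same ImagePullBackOff messages reappear every few minutes for the rest of the log. A hedged sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: image-pull backoff starts at 10s, doubles per
	// failure, and is capped at 5 minutes; these values are not in the log itself.
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("failed pull %d: next retry in %v\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}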
Jan 20 00:43:21.743000 audit[5854]: CRED_DISP pid=5854 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.782135 kernel: audit: type=1106 audit(1768869801.742:801): pid=5854 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.782211 kernel: audit: type=1104 audit(1768869801.743:802): pid=5854 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:21.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.14:22-10.200.16.10:43244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:21.834516 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:43252.service - OpenSSH per-connection server daemon (10.200.16.10:43252). Jan 20 00:43:21.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.14:22-10.200.16.10:43252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:22.253000 audit[5870]: USER_ACCT pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:22.255163 sshd[5870]: Accepted publickey for core from 10.200.16.10 port 43252 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:22.254000 audit[5870]: CRED_ACQ pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:22.254000 audit[5870]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde558300 a2=3 a3=0 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:22.254000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:22.256530 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:22.263077 systemd-logind[2110]: New session 13 of user core. Jan 20 00:43:22.267460 systemd[1]: Started session-13.scope - Session 13 of User core. 
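[Editor's note] The SYSCALL audit records use raw numbers: arch=c00000b7 is AUDIT_ARCH_AARCH64 (64-bit little-endian arm64), and on that ABI syscall=64 is write, consistent with this being an aarch64 machine. A minimal lookup covering only the values seen in these records:

package main

import "fmt"

// Lookup tables covering only the values that appear in these audit records.
var auditArch = map[uint32]string{
	0xC00000B7: "AUDIT_ARCH_AARCH64", // EM_AARCH64 (0xB7) | 64-bit | little-endian
}

var aarch64Syscall = map[int]string{
	64: "write", // generic arm64 syscall table
}

func main() {
	fmt.Println("arch=c00000b7 ->", auditArch[0xC00000B7])
	fmt.Println("syscall=64    ->", aarch64Syscall[64])
}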
Jan 20 00:43:22.268000 audit[5870]: USER_START pid=5870 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:22.271000 audit[5873]: CRED_ACQ pid=5873 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:22.628348 sshd[5873]: Connection closed by 10.200.16.10 port 43252 Jan 20 00:43:22.628801 sshd-session[5870]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:22.628000 audit[5870]: USER_END pid=5870 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:22.628000 audit[5870]: CRED_DISP pid=5870 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:22.632289 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:43252.service: Deactivated successfully. Jan 20 00:43:22.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.14:22-10.200.16.10:43252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:22.634499 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:43:22.635896 systemd-logind[2110]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:43:22.637956 systemd-logind[2110]: Removed session 13. Jan 20 00:43:22.710522 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:43266.service - OpenSSH per-connection server daemon (10.200.16.10:43266). Jan 20 00:43:22.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.14:22-10.200.16.10:43266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:23.103571 sshd[5890]: Accepted publickey for core from 10.200.16.10 port 43266 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:23.102000 audit[5890]: USER_ACCT pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:23.104000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:23.104000 audit[5890]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7510260 a2=3 a3=0 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:23.104000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:23.106131 sshd-session[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:23.112481 systemd-logind[2110]: New session 14 of user core. Jan 20 00:43:23.118461 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 00:43:23.120000 audit[5890]: USER_START pid=5890 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:23.122000 audit[5893]: CRED_ACQ pid=5893 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:23.366877 sshd[5893]: Connection closed by 10.200.16.10 port 43266 Jan 20 00:43:23.367765 sshd-session[5890]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:23.367000 audit[5890]: USER_END pid=5890 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:23.367000 audit[5890]: CRED_DISP pid=5890 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:23.370614 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:43266.service: Deactivated successfully. Jan 20 00:43:23.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.14:22-10.200.16.10:43266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:23.372927 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:43:23.373816 systemd-logind[2110]: Session 14 logged out. Waiting for processes to exit. 
Jan 20 00:43:23.375731 systemd-logind[2110]: Removed session 14. Jan 20 00:43:23.542326 kubelet[3599]: E0120 00:43:23.541487 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:43:23.543653 kubelet[3599]: E0120 00:43:23.543619 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:43:28.455218 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:43268.service - OpenSSH per-connection server daemon (10.200.16.10:43268). Jan 20 00:43:28.459794 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 00:43:28.459870 kernel: audit: type=1130 audit(1768869808.454:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.14:22-10.200.16.10:43268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:28.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.14:22-10.200.16.10:43268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:28.540863 kubelet[3599]: E0120 00:43:28.540823 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:43:28.882000 audit[5910]: USER_ACCT pid=5910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.900406 sshd[5910]: Accepted publickey for core from 10.200.16.10 port 43268 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:28.900841 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:28.899000 audit[5910]: CRED_ACQ pid=5910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.917063 kernel: audit: type=1101 audit(1768869808.882:823): pid=5910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.917135 kernel: audit: type=1103 audit(1768869808.899:824): pid=5910 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.926939 kernel: audit: type=1006 audit(1768869808.899:825): pid=5910 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 00:43:28.899000 audit[5910]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf781c70 a2=3 a3=0 items=0 ppid=1 pid=5910 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:28.944092 kernel: audit: type=1300 audit(1768869808.899:825): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf781c70 a2=3 a3=0 items=0 ppid=1 pid=5910 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:28.899000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:28.950968 kernel: audit: type=1327 audit(1768869808.899:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:28.954089 systemd-logind[2110]: New session 15 of user core. Jan 20 00:43:28.962427 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 00:43:28.962000 audit[5910]: USER_START pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.981000 audit[5919]: CRED_ACQ pid=5919 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.996663 kernel: audit: type=1105 audit(1768869808.962:826): pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:28.996722 kernel: audit: type=1103 audit(1768869808.981:827): pid=5919 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:29.182760 sshd[5919]: Connection closed by 10.200.16.10 port 43268 Jan 20 00:43:29.183484 sshd-session[5910]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:29.183000 audit[5910]: USER_END pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:29.203563 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:43268.service: Deactivated successfully. Jan 20 00:43:29.205060 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:43:29.183000 audit[5910]: CRED_DISP pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:29.219408 kernel: audit: type=1106 audit(1768869809.183:828): pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:29.219452 kernel: audit: type=1104 audit(1768869809.183:829): pid=5910 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:29.220552 systemd-logind[2110]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:43:29.221846 systemd-logind[2110]: Removed session 15. Jan 20 00:43:29.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.14:22-10.200.16.10:43268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:29.541260 kubelet[3599]: E0120 00:43:29.541218 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:43:30.542380 kubelet[3599]: E0120 00:43:30.542162 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:43:32.541092 kubelet[3599]: E0120 00:43:32.540895 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:43:33.435857 waagent[2371]: 2026-01-20T00:43:33.435807Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 00:43:33.443651 waagent[2371]: 2026-01-20T00:43:33.443616Z INFO ExtHandler Jan 20 00:43:33.443720 waagent[2371]: 2026-01-20T00:43:33.443698Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4533edc1-d706-4593-b6b9-62f33ed1e3df eTag: 6076365426311911690 source: Fabric] Jan 20 00:43:33.444137 waagent[2371]: 2026-01-20T00:43:33.444102Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
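[Editor's note] The waagent entries above describe the Azure guest agent refreshing its goal state from the WireServer. By WALinuxAgent convention the WireServer sits at 168.63.129.16 and the goal state is an XML document fetched over plain HTTP with an x-ms-version header; those details are stated here as assumptions, since the log does not print them, and the request only works from inside an Azure VM. A rough sketch of that fetch:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Conventional Azure WireServer goal-state endpoint used by waagent
	// (assumption; reachable only from inside an Azure VM).
	req, err := http.NewRequest(http.MethodGet,
		"http://168.63.129.16/machine/?comp=goalstate", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Protocol version header the agent sends on goal-state requests.
	req.Header.Set("x-ms-version", "2012-11-30")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err) // fails anywhere but an Azure VM
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The XML includes the Incarnation element that the entries above
	// refer to ("incarnation 2").
	fmt.Println(string(body))
}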
Jan 20 00:43:33.444717 waagent[2371]: 2026-01-20T00:43:33.444684Z INFO ExtHandler Jan 20 00:43:33.444764 waagent[2371]: 2026-01-20T00:43:33.444746Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 00:43:33.487323 waagent[2371]: 2026-01-20T00:43:33.487282Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 00:43:33.540824 waagent[2371]: 2026-01-20T00:43:33.540757Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D7475F5640D56B75CE412ED987DD024D26219A76', 'hasPrivateKey': True} Jan 20 00:43:33.541095 containerd[2132]: time="2026-01-20T00:43:33.541040340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:43:33.542129 waagent[2371]: 2026-01-20T00:43:33.541641Z INFO ExtHandler Fetch goal state completed Jan 20 00:43:33.542129 waagent[2371]: 2026-01-20T00:43:33.542065Z INFO ExtHandler ExtHandler Jan 20 00:43:33.542207 waagent[2371]: 2026-01-20T00:43:33.542158Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 0286da75-562e-4540-9dc5-2829be754306 correlation 68776c76-e18b-4b11-a85c-f4b1877da454 created: 2026-01-20T00:43:27.536995Z] Jan 20 00:43:33.542551 waagent[2371]: 2026-01-20T00:43:33.542514Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 00:43:33.543163 waagent[2371]: 2026-01-20T00:43:33.543104Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 20 00:43:33.794155 containerd[2132]: time="2026-01-20T00:43:33.794117737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:33.797200 containerd[2132]: time="2026-01-20T00:43:33.797173482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:43:33.797335 containerd[2132]: time="2026-01-20T00:43:33.797229108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:33.797527 kubelet[3599]: E0120 00:43:33.797494 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:43:33.798021 kubelet[3599]: E0120 00:43:33.797844 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:43:33.798021 kubelet[3599]: E0120 00:43:33.797985 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4ed88e40380e41739ab0886868a4c216,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:33.800367 containerd[2132]: time="2026-01-20T00:43:33.800177009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:43:34.061149 containerd[2132]: time="2026-01-20T00:43:34.061034385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:34.065288 containerd[2132]: time="2026-01-20T00:43:34.065213718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:43:34.065386 containerd[2132]: time="2026-01-20T00:43:34.065311658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:34.065525 kubelet[3599]: E0120 00:43:34.065493 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:43:34.065624 kubelet[3599]: E0120 00:43:34.065609 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:43:34.065811 kubelet[3599]: E0120 00:43:34.065767 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s4k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78b7cf9965-hz2t4_calico-system(050b7649-47d7-4543-80dc-167b27775ab2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:34.067571 kubelet[3599]: E0120 00:43:34.067252 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:43:34.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.14:22-10.200.16.10:39824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:34.267249 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:39824.service - OpenSSH per-connection server daemon (10.200.16.10:39824). 
Jan 20 00:43:34.286250 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:43:34.286351 kernel: audit: type=1130 audit(1768869814.267:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.14:22-10.200.16.10:39824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:34.541957 kubelet[3599]: E0120 00:43:34.541750 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:43:34.675000 audit[5937]: USER_ACCT pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.675935 sshd[5937]: Accepted publickey for core from 10.200.16.10 port 39824 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:34.693000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.694378 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:34.709092 kernel: audit: type=1101 audit(1768869814.675:832): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.709160 kernel: audit: type=1103 audit(1768869814.693:833): pid=5937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.721468 kernel: audit: type=1006 audit(1768869814.693:834): pid=5937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 00:43:34.693000 audit[5937]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcef7a5c0 a2=3 a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:34.738636 kernel: audit: type=1300 audit(1768869814.693:834): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcef7a5c0 a2=3 
a3=0 items=0 ppid=1 pid=5937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:34.693000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:34.745998 kernel: audit: type=1327 audit(1768869814.693:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:34.748549 systemd-logind[2110]: New session 16 of user core. Jan 20 00:43:34.754504 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:43:34.756000 audit[5937]: USER_START pid=5937 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.776000 audit[5942]: CRED_ACQ pid=5942 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.790630 kernel: audit: type=1105 audit(1768869814.756:835): pid=5937 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.790699 kernel: audit: type=1103 audit(1768869814.776:836): pid=5942 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.980018 sshd[5942]: Connection closed by 10.200.16.10 port 39824 Jan 20 00:43:34.980243 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:34.981000 audit[5937]: USER_END pid=5937 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.985027 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:39824.service: Deactivated successfully. Jan 20 00:43:34.987074 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:43:34.981000 audit[5937]: CRED_DISP pid=5937 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:35.001190 systemd-logind[2110]: Session 16 logged out. Waiting for processes to exit. 
Jan 20 00:43:35.015412 kernel: audit: type=1106 audit(1768869814.981:837): pid=5937 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:35.015479 kernel: audit: type=1104 audit(1768869814.981:838): pid=5937 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:34.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.14:22-10.200.16.10:39824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:35.016213 systemd-logind[2110]: Removed session 16. Jan 20 00:43:35.541337 kubelet[3599]: E0120 00:43:35.541236 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:43:40.073580 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:43:40.073658 kernel: audit: type=1130 audit(1768869820.069:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.14:22-10.200.16.10:40558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:40.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.14:22-10.200.16.10:40558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:40.069539 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:40558.service - OpenSSH per-connection server daemon (10.200.16.10:40558). Jan 20 00:43:40.499000 audit[5987]: USER_ACCT pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.499688 sshd[5987]: Accepted publickey for core from 10.200.16.10 port 40558 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:40.516000 audit[5987]: CRED_ACQ pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.516975 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:40.522248 systemd-logind[2110]: New session 17 of user core. 
Jan 20 00:43:40.531119 kernel: audit: type=1101 audit(1768869820.499:841): pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.531176 kernel: audit: type=1103 audit(1768869820.516:842): pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.533510 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:43:40.541747 kernel: audit: type=1006 audit(1768869820.516:843): pid=5987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 00:43:40.516000 audit[5987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8388230 a2=3 a3=0 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:40.557809 kernel: audit: type=1300 audit(1768869820.516:843): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8388230 a2=3 a3=0 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:40.516000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:40.568644 kernel: audit: type=1327 audit(1768869820.516:843): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:40.545000 audit[5987]: USER_START pid=5987 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.586021 kernel: audit: type=1105 audit(1768869820.545:844): pid=5987 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.561000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.601161 kernel: audit: type=1103 audit(1768869820.561:845): pid=5990 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.840419 sshd[5990]: Connection closed by 10.200.16.10 port 40558 Jan 20 00:43:40.841120 sshd-session[5987]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:40.842000 audit[5987]: USER_END pid=5987 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.848743 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:40558.service: Deactivated successfully. Jan 20 00:43:40.854145 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:43:40.859084 systemd-logind[2110]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:43:40.842000 audit[5987]: CRED_DISP pid=5987 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.865036 systemd-logind[2110]: Removed session 17. Jan 20 00:43:40.876248 kernel: audit: type=1106 audit(1768869820.842:846): pid=5987 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.876346 kernel: audit: type=1104 audit(1768869820.842:847): pid=5987 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:40.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.14:22-10.200.16.10:40558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:40.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.14:22-10.200.16.10:40562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:40.929533 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:40562.service - OpenSSH per-connection server daemon (10.200.16.10:40562). 
Jan 20 00:43:41.345000 audit[6002]: USER_ACCT pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:41.346330 sshd[6002]: Accepted publickey for core from 10.200.16.10 port 40562 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:41.345000 audit[6002]: CRED_ACQ pid=6002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:41.345000 audit[6002]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2e29850 a2=3 a3=0 items=0 ppid=1 pid=6002 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:41.345000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:41.347546 sshd-session[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:41.351785 systemd-logind[2110]: New session 18 of user core. Jan 20 00:43:41.357454 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:43:41.357000 audit[6002]: USER_START pid=6002 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:41.360000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:41.542314 containerd[2132]: time="2026-01-20T00:43:41.542261229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:43:41.739864 sshd[6005]: Connection closed by 10.200.16.10 port 40562 Jan 20 00:43:41.740588 sshd-session[6002]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:41.740000 audit[6002]: USER_END pid=6002 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:41.740000 audit[6002]: CRED_DISP pid=6002 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:41.743812 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:40562.service: Deactivated successfully. Jan 20 00:43:41.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.14:22-10.200.16.10:40562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:41.745722 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 20 00:43:41.747136 systemd-logind[2110]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:43:41.749051 systemd-logind[2110]: Removed session 18. Jan 20 00:43:41.805381 containerd[2132]: time="2026-01-20T00:43:41.805342476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:41.808547 containerd[2132]: time="2026-01-20T00:43:41.808517757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:43:41.808702 containerd[2132]: time="2026-01-20T00:43:41.808599679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:41.808779 kubelet[3599]: E0120 00:43:41.808737 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:41.809038 kubelet[3599]: E0120 00:43:41.808785 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:41.809038 kubelet[3599]: E0120 00:43:41.808899 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpls5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d94f7fcbb-jr4rt_calico-apiserver(77e9acbe-87a2-440f-b406-8c8900ab52f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:41.810847 kubelet[3599]: E0120 00:43:41.810797 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:43:41.827527 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:40570.service - OpenSSH per-connection server daemon (10.200.16.10:40570). Jan 20 00:43:41.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.14:22-10.200.16.10:40570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:42.239000 audit[6030]: USER_ACCT pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:42.241033 sshd[6030]: Accepted publickey for core from 10.200.16.10 port 40570 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:42.241000 audit[6030]: CRED_ACQ pid=6030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:42.241000 audit[6030]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6fb8700 a2=3 a3=0 items=0 ppid=1 pid=6030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:42.241000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:42.243480 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:42.248841 systemd-logind[2110]: New session 19 of user core. 
Jan 20 00:43:42.256133 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:43:42.257000 audit[6030]: USER_START pid=6030 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:42.259000 audit[6033]: CRED_ACQ pid=6033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:42.542160 containerd[2132]: time="2026-01-20T00:43:42.541740594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:43:42.798966 containerd[2132]: time="2026-01-20T00:43:42.798845905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:42.802213 containerd[2132]: time="2026-01-20T00:43:42.802163679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:43:42.802312 containerd[2132]: time="2026-01-20T00:43:42.802242867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:42.803493 kubelet[3599]: E0120 00:43:42.802476 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:42.803493 kubelet[3599]: E0120 00:43:42.802543 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:42.803493 kubelet[3599]: E0120 00:43:42.802779 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8fb8fd4d-9hn8w_calico-apiserver(6eedb683-7841-469d-9465-68ae5bed2952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:42.803769 containerd[2132]: time="2026-01-20T00:43:42.803547023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:43:42.804724 kubelet[3599]: E0120 00:43:42.804692 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:43:42.988000 audit[6045]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=6045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:42.988000 audit[6045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffadff3e0 a2=0 a3=1 items=0 ppid=3769 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:42.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:42.998000 audit[6045]: NETFILTER_CFG 
table=nat:149 family=2 entries=20 op=nft_register_rule pid=6045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:42.998000 audit[6045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffadff3e0 a2=0 a3=1 items=0 ppid=3769 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:42.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:43.009000 audit[6047]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=6047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:43.009000 audit[6047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffff99f5a0 a2=0 a3=1 items=0 ppid=3769 pid=6047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:43.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:43.013000 audit[6047]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=6047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:43.013000 audit[6047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffff99f5a0 a2=0 a3=1 items=0 ppid=3769 pid=6047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:43.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:43.074441 sshd[6033]: Connection closed by 10.200.16.10 port 40570 Jan 20 00:43:43.076220 sshd-session[6030]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:43.076000 audit[6030]: USER_END pid=6030 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.077000 audit[6030]: CRED_DISP pid=6030 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.080367 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:43:43.082774 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:40570.service: Deactivated successfully. Jan 20 00:43:43.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.14:22-10.200.16.10:40570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:43.085728 systemd-logind[2110]: Session 19 logged out. Waiting for processes to exit. 
Jan 20 00:43:43.087636 containerd[2132]: time="2026-01-20T00:43:43.087602617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:43.087903 systemd-logind[2110]: Removed session 19. Jan 20 00:43:43.091648 containerd[2132]: time="2026-01-20T00:43:43.091605411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:43:43.092356 containerd[2132]: time="2026-01-20T00:43:43.091620876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:43.092394 kubelet[3599]: E0120 00:43:43.091799 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:43:43.092394 kubelet[3599]: E0120 00:43:43.091841 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:43:43.092394 kubelet[3599]: E0120 00:43:43.091949 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n9ngh_calico-system(28862320-350f-4f29-92bb-d8201c93580b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:43.093517 kubelet[3599]: E0120 00:43:43.093491 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:43:43.163227 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:40574.service - OpenSSH per-connection server daemon (10.200.16.10:40574). Jan 20 00:43:43.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.14:22-10.200.16.10:40574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:43.541946 containerd[2132]: time="2026-01-20T00:43:43.541827772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:43:43.589000 audit[6052]: USER_ACCT pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.590600 sshd[6052]: Accepted publickey for core from 10.200.16.10 port 40574 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:43.589000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.590000 audit[6052]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeedefc40 a2=3 a3=0 items=0 ppid=1 pid=6052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:43.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:43.591966 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:43.596235 systemd-logind[2110]: New session 20 of user core. Jan 20 00:43:43.607437 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:43:43.608000 audit[6052]: USER_START pid=6052 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.609000 audit[6055]: CRED_ACQ pid=6055 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.816350 containerd[2132]: time="2026-01-20T00:43:43.816192982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:43.819697 containerd[2132]: time="2026-01-20T00:43:43.819610848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:43:43.819697 containerd[2132]: time="2026-01-20T00:43:43.819666219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:43.819964 kubelet[3599]: E0120 00:43:43.819918 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:43:43.820021 kubelet[3599]: E0120 00:43:43.819972 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:43:43.820315 kubelet[3599]: E0120 00:43:43.820092 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm99j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d88c779f-cqh6x_calico-system(86fc1b8f-992e-433a-a4e5-96b8bd195d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:43.821687 kubelet[3599]: E0120 00:43:43.821648 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:43:43.978550 sshd[6055]: Connection closed by 10.200.16.10 port 40574 Jan 20 00:43:43.978465 sshd-session[6052]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:43.981000 audit[6052]: USER_END pid=6052 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.981000 audit[6052]: CRED_DISP pid=6052 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:43.987043 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:40574.service: Deactivated successfully. Jan 20 00:43:43.987419 systemd-logind[2110]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:43:43.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.14:22-10.200.16.10:40574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:43.991567 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:43:43.993381 systemd-logind[2110]: Removed session 20. Jan 20 00:43:44.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.14:22-10.200.16.10:40584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:44.062531 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:40584.service - OpenSSH per-connection server daemon (10.200.16.10:40584). Jan 20 00:43:44.453000 audit[6066]: USER_ACCT pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:44.455216 sshd[6066]: Accepted publickey for core from 10.200.16.10 port 40584 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:44.454000 audit[6066]: CRED_ACQ pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:44.455000 audit[6066]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe99d14d0 a2=3 a3=0 items=0 ppid=1 pid=6066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:44.455000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:44.456684 sshd-session[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:44.460871 systemd-logind[2110]: New session 21 of user core. Jan 20 00:43:44.464426 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 00:43:44.465000 audit[6066]: USER_START pid=6066 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:44.466000 audit[6069]: CRED_ACQ pid=6069 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:44.709492 sshd[6069]: Connection closed by 10.200.16.10 port 40584 Jan 20 00:43:44.709864 sshd-session[6066]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:44.710000 audit[6066]: USER_END pid=6066 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:44.710000 audit[6066]: CRED_DISP pid=6066 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:44.714015 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:40584.service: Deactivated successfully. Jan 20 00:43:44.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.14:22-10.200.16.10:40584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:44.717109 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:43:44.718948 systemd-logind[2110]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:43:44.719779 systemd-logind[2110]: Removed session 21. 
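[Annotation] The pull failures recorded above all follow the same pattern: ghcr.io answers 404 Not Found for the flatcar/calico/*:v3.30.4 references, containerd reports NotFound, and the kubelet surfaces ErrImagePull. A minimal sketch (Python, assuming the standard OCI distribution token flow for ghcr.io; none of these calls appear in the log itself) of how one might confirm from outside the node that the tag simply does not resolve:

    # Hypothetical check: does ghcr.io/flatcar/calico/kube-controllers:v3.30.4 resolve?
    # The token endpoint and Accept header below are assumptions based on the usual
    # OCI registry token flow, not anything taken from this log.
    import json
    import urllib.error
    import urllib.request

    REPO = "flatcar/calico/kube-controllers"   # repository seen in the log
    TAG = "v3.30.4"                            # tag the kubelet tried to pull

    def manifest_exists(repo: str, tag: str) -> bool:
        # 1. Fetch an anonymous pull token for the repository.
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repo}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # 2. HEAD the manifest; a 404 here matches the containerd error above.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json",
            },
            method="HEAD",
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        print(manifest_exists(REPO, TAG))

A 404 from the manifest endpoint would match the "failed to resolve image ... not found" errors above and point at a missing tag rather than node-side networking or credential problems.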
Jan 20 00:43:45.544131 kubelet[3599]: E0120 00:43:45.544060 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:43:47.541358 containerd[2132]: time="2026-01-20T00:43:47.541019850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:43:47.778946 containerd[2132]: time="2026-01-20T00:43:47.778767214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:47.781860 containerd[2132]: time="2026-01-20T00:43:47.781827129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:43:47.781955 containerd[2132]: time="2026-01-20T00:43:47.781847034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:47.782157 kubelet[3599]: E0120 00:43:47.782102 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:43:47.782474 kubelet[3599]: E0120 00:43:47.782166 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:43:47.782474 kubelet[3599]: E0120 00:43:47.782266 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:47.784584 containerd[2132]: time="2026-01-20T00:43:47.784298356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:43:48.031523 containerd[2132]: time="2026-01-20T00:43:48.030423644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:48.034970 containerd[2132]: time="2026-01-20T00:43:48.034841261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:43:48.034970 containerd[2132]: time="2026-01-20T00:43:48.034928976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:48.035126 kubelet[3599]: E0120 00:43:48.035086 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:43:48.035178 kubelet[3599]: E0120 00:43:48.035135 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:43:48.035270 kubelet[3599]: E0120 00:43:48.035233 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njtjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t6nwm_calico-system(e914416f-b403-4119-a223-0b5c6e18edd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:48.036467 kubelet[3599]: E0120 00:43:48.036433 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:43:48.839000 audit[6081]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=6081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:48.844422 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 20 00:43:48.844499 kernel: audit: type=1325 
audit(1768869828.839:889): table=filter:152 family=2 entries=26 op=nft_register_rule pid=6081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:48.839000 audit[6081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe6ebd000 a2=0 a3=1 items=0 ppid=3769 pid=6081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:48.874096 kernel: audit: type=1300 audit(1768869828.839:889): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe6ebd000 a2=0 a3=1 items=0 ppid=3769 pid=6081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:48.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:48.885896 kernel: audit: type=1327 audit(1768869828.839:889): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:48.876000 audit[6081]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=6081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:48.895537 kernel: audit: type=1325 audit(1768869828.876:890): table=nat:153 family=2 entries=104 op=nft_register_chain pid=6081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 00:43:48.876000 audit[6081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe6ebd000 a2=0 a3=1 items=0 ppid=3769 pid=6081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:48.913893 kernel: audit: type=1300 audit(1768869828.876:890): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe6ebd000 a2=0 a3=1 items=0 ppid=3769 pid=6081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:48.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:48.925730 kernel: audit: type=1327 audit(1768869828.876:890): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 00:43:49.793442 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:42962.service - OpenSSH per-connection server daemon (10.200.16.10:42962). Jan 20 00:43:49.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.14:22-10.200.16.10:42962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:49.808325 kernel: audit: type=1130 audit(1768869829.792:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.14:22-10.200.16.10:42962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:50.186000 audit[6083]: USER_ACCT pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.189484 sshd[6083]: Accepted publickey for core from 10.200.16.10 port 42962 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:50.207829 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:50.206000 audit[6083]: CRED_ACQ pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.224005 kernel: audit: type=1101 audit(1768869830.186:892): pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.224070 kernel: audit: type=1103 audit(1768869830.206:893): pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.229314 systemd-logind[2110]: New session 22 of user core. Jan 20 00:43:50.235461 kernel: audit: type=1006 audit(1768869830.206:894): pid=6083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 00:43:50.206000 audit[6083]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff157ddc0 a2=3 a3=0 items=0 ppid=1 pid=6083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:50.206000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:50.238461 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 00:43:50.240000 audit[6083]: USER_START pid=6083 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.241000 audit[6086]: CRED_ACQ pid=6086 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.469594 sshd[6086]: Connection closed by 10.200.16.10 port 42962 Jan 20 00:43:50.470045 sshd-session[6083]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:50.469000 audit[6083]: USER_END pid=6083 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.470000 audit[6083]: CRED_DISP pid=6083 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:50.474170 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:42962.service: Deactivated successfully. Jan 20 00:43:50.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.14:22-10.200.16.10:42962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:50.478205 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:43:50.480130 systemd-logind[2110]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:43:50.482287 systemd-logind[2110]: Removed session 22. 
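[Annotation] The audit PROCTITLE fields in the records above carry the process command line hex-encoded, with NUL bytes separating argv entries. A small illustrative decoder (Python; my own sketch, not part of any tooling referenced in the log), applied to two values copied verbatim from the entries above:

    # Decode an audit PROCTITLE value: hex-encoded command line, argv entries
    # separated by NUL bytes as emitted by the kernel audit subsystem.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00"))

    # sshd-session record
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]

    # iptables-restore record
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters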
Jan 20 00:43:50.542722 containerd[2132]: time="2026-01-20T00:43:50.542694011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:43:50.846900 containerd[2132]: time="2026-01-20T00:43:50.846853052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 00:43:50.849963 containerd[2132]: time="2026-01-20T00:43:50.849925095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:43:50.850231 containerd[2132]: time="2026-01-20T00:43:50.850001066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 00:43:50.850492 kubelet[3599]: E0120 00:43:50.850226 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:50.850492 kubelet[3599]: E0120 00:43:50.850278 3599 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:50.850492 kubelet[3599]: E0120 00:43:50.850425 3599 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6cps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c8fb8fd4d-xhrnz_calico-apiserver(0b136bd0-6a42-4726-87cd-a3538d5ee86b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:50.851669 kubelet[3599]: E0120 00:43:50.851558 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:43:52.542324 kubelet[3599]: E0120 00:43:52.541831 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:43:55.558311 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:42974.service - OpenSSH per-connection server daemon (10.200.16.10:42974). Jan 20 00:43:55.564551 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 00:43:55.564625 kernel: audit: type=1130 audit(1768869835.557:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.14:22-10.200.16.10:42974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:43:55.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.14:22-10.200.16.10:42974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:55.993000 audit[6097]: USER_ACCT pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.010139 sshd[6097]: Accepted publickey for core from 10.200.16.10 port 42974 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:43:56.011369 kernel: audit: type=1101 audit(1768869835.993:901): pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.010000 audit[6097]: CRED_ACQ pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.011906 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:56.037558 kernel: audit: type=1103 audit(1768869836.010:902): pid=6097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.037626 kernel: audit: type=1006 audit(1768869836.010:903): pid=6097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 00:43:56.010000 audit[6097]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe96c4de0 a2=3 a3=0 items=0 ppid=1 pid=6097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:56.054289 kernel: audit: type=1300 audit(1768869836.010:903): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe96c4de0 a2=3 a3=0 items=0 ppid=1 pid=6097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:43:56.010000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:56.057255 systemd-logind[2110]: New session 23 of user core. Jan 20 00:43:56.060941 kernel: audit: type=1327 audit(1768869836.010:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:43:56.063446 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 00:43:56.064000 audit[6097]: USER_START pid=6097 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.066000 audit[6100]: CRED_ACQ pid=6100 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.098139 kernel: audit: type=1105 audit(1768869836.064:904): pid=6097 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.098201 kernel: audit: type=1103 audit(1768869836.066:905): pid=6100 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.285409 sshd[6100]: Connection closed by 10.200.16.10 port 42974 Jan 20 00:43:56.285873 sshd-session[6097]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:56.285000 audit[6097]: USER_END pid=6097 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.305673 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:42974.service: Deactivated successfully. Jan 20 00:43:56.307108 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:43:56.285000 audit[6097]: CRED_DISP pid=6097 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.309715 systemd-logind[2110]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:43:56.310665 systemd-logind[2110]: Removed session 23. Jan 20 00:43:56.321567 kernel: audit: type=1106 audit(1768869836.285:906): pid=6097 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.321625 kernel: audit: type=1104 audit(1768869836.285:907): pid=6097 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:43:56.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.14:22-10.200.16.10:42974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:43:56.542900 kubelet[3599]: E0120 00:43:56.542378 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:43:56.543755 kubelet[3599]: E0120 00:43:56.543602 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:43:56.544077 kubelet[3599]: E0120 00:43:56.543908 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:43:58.548490 kubelet[3599]: E0120 00:43:58.547470 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:43:59.542235 kubelet[3599]: E0120 00:43:59.542187 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3" Jan 20 00:44:01.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.14:22-10.200.16.10:51342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:44:01.373918 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:51342.service - OpenSSH per-connection server daemon (10.200.16.10:51342). Jan 20 00:44:01.376954 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:44:01.377026 kernel: audit: type=1130 audit(1768869841.373:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.14:22-10.200.16.10:51342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:44:01.788000 audit[6111]: USER_ACCT pid=6111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.804419 sshd[6111]: Accepted publickey for core from 10.200.16.10 port 51342 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:44:01.805834 sshd-session[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:44:01.804000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.825605 kernel: audit: type=1101 audit(1768869841.788:910): pid=6111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.825674 kernel: audit: type=1103 audit(1768869841.804:911): pid=6111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.834201 systemd-logind[2110]: New session 24 of user core. Jan 20 00:44:01.843274 kernel: audit: type=1006 audit(1768869841.805:912): pid=6111 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 00:44:01.846467 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 00:44:01.805000 audit[6111]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff08d6cd0 a2=3 a3=0 items=0 ppid=1 pid=6111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:44:01.864391 kernel: audit: type=1300 audit(1768869841.805:912): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff08d6cd0 a2=3 a3=0 items=0 ppid=1 pid=6111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:44:01.805000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:44:01.879237 kernel: audit: type=1327 audit(1768869841.805:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:44:01.869000 audit[6111]: USER_START pid=6111 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.897619 kernel: audit: type=1105 audit(1768869841.869:913): pid=6111 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.871000 audit[6114]: CRED_ACQ pid=6114 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:01.912896 kernel: audit: type=1103 audit(1768869841.871:914): pid=6114 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:02.068572 sshd[6114]: Connection closed by 10.200.16.10 port 51342 Jan 20 00:44:02.069251 sshd-session[6111]: pam_unix(sshd:session): session closed for user core Jan 20 00:44:02.070000 audit[6111]: USER_END pid=6111 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:02.073552 systemd-logind[2110]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:44:02.075195 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:51342.service: Deactivated successfully. Jan 20 00:44:02.079475 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:44:02.084485 systemd-logind[2110]: Removed session 24. 
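[Annotation] In the SYSCALL audit records, arch=c00000b7 is the AArch64 audit architecture and the syscall numbers follow the arm64 (asm-generic) table, so syscall=64 should be write (the sshd-session records) and syscall=211 sendmsg (consistent with iptables-restore pushing nftables rules over a netlink socket). A tiny lookup sketch, with these two numbers hard-coded as an assumption rather than read from the running kernel:

    # Minimal lookup for the two audit SYSCALL shapes seen in this log; the
    # mappings are from the arm64 (asm-generic) syscall table, not this host.
    AUDIT_ARCH = {0xC00000B7: "AUDIT_ARCH_AARCH64 (64-bit little-endian arm64)"}
    ARM64_SYSCALLS = {64: "write", 211: "sendmsg"}

    def describe(arch_hex: str, nr: int) -> str:
        arch = AUDIT_ARCH.get(int(arch_hex, 16), "unknown arch")
        return f"{arch}: {ARM64_SYSCALLS.get(nr, f'syscall {nr}')}"

    print(describe("c00000b7", 64))   # sshd-session audit records above
    print(describe("c00000b7", 211))  # iptables-restore netlink records above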
Jan 20 00:44:02.070000 audit[6111]: CRED_DISP pid=6111 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:02.103606 kernel: audit: type=1106 audit(1768869842.070:915): pid=6111 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:02.103715 kernel: audit: type=1104 audit(1768869842.070:916): pid=6111 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:02.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.14:22-10.200.16.10:51342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:44:02.543062 kubelet[3599]: E0120 00:44:02.541857 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-xhrnz" podUID="0b136bd0-6a42-4726-87cd-a3538d5ee86b" Jan 20 00:44:03.542290 kubelet[3599]: E0120 00:44:03.541531 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d94f7fcbb-jr4rt" podUID="77e9acbe-87a2-440f-b406-8c8900ab52f5" Jan 20 00:44:07.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.14:22-10.200.16.10:51356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:44:07.157551 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:51356.service - OpenSSH per-connection server daemon (10.200.16.10:51356). Jan 20 00:44:07.160764 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:44:07.160934 kernel: audit: type=1130 audit(1768869847.156:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.14:22-10.200.16.10:51356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:44:07.541535 kubelet[3599]: E0120 00:44:07.541481 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c8fb8fd4d-9hn8w" podUID="6eedb683-7841-469d-9465-68ae5bed2952" Jan 20 00:44:07.592000 audit[6128]: USER_ACCT pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.610603 sshd[6128]: Accepted publickey for core from 10.200.16.10 port 51356 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:44:07.611000 audit[6128]: CRED_ACQ pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.612226 sshd-session[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:44:07.627959 kernel: audit: type=1101 audit(1768869847.592:919): pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.628019 kernel: audit: type=1103 audit(1768869847.611:920): pid=6128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.635698 systemd-logind[2110]: New session 25 of user core. Jan 20 00:44:07.639274 kernel: audit: type=1006 audit(1768869847.611:921): pid=6128 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 00:44:07.611000 audit[6128]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3439a50 a2=3 a3=0 items=0 ppid=1 pid=6128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:44:07.657481 kernel: audit: type=1300 audit(1768869847.611:921): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3439a50 a2=3 a3=0 items=0 ppid=1 pid=6128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:44:07.611000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:44:07.663481 kernel: audit: type=1327 audit(1768869847.611:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:44:07.665479 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 00:44:07.668000 audit[6128]: USER_START pid=6128 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.686000 audit[6137]: CRED_ACQ pid=6137 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.702211 kernel: audit: type=1105 audit(1768869847.668:922): pid=6128 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.703012 kernel: audit: type=1103 audit(1768869847.686:923): pid=6137 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.904493 sshd[6137]: Connection closed by 10.200.16.10 port 51356 Jan 20 00:44:07.905239 sshd-session[6128]: pam_unix(sshd:session): session closed for user core Jan 20 00:44:07.906000 audit[6128]: USER_END pid=6128 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.932052 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:51356.service: Deactivated successfully. Jan 20 00:44:07.933803 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:44:07.935019 systemd-logind[2110]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:44:07.936949 systemd-logind[2110]: Removed session 25. Jan 20 00:44:07.906000 audit[6128]: CRED_DISP pid=6128 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.954268 kernel: audit: type=1106 audit(1768869847.906:924): pid=6128 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.954354 kernel: audit: type=1104 audit(1768869847.906:925): pid=6128 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:07.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.14:22-10.200.16.10:51356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:44:08.543186 kubelet[3599]: E0120 00:44:08.543148 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n9ngh" podUID="28862320-350f-4f29-92bb-d8201c93580b" Jan 20 00:44:11.540991 kubelet[3599]: E0120 00:44:11.540809 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d88c779f-cqh6x" podUID="86fc1b8f-992e-433a-a4e5-96b8bd195d5d" Jan 20 00:44:12.544076 kubelet[3599]: E0120 00:44:12.543881 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78b7cf9965-hz2t4" podUID="050b7649-47d7-4543-80dc-167b27775ab2" Jan 20 00:44:12.996651 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:47398.service - OpenSSH per-connection server daemon (10.200.16.10:47398). Jan 20 00:44:12.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.14:22-10.200.16.10:47398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 00:44:12.999758 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 00:44:13.000064 kernel: audit: type=1130 audit(1768869852.995:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.14:22-10.200.16.10:47398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:44:13.444000 audit[6168]: USER_ACCT pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.446168 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 47398 ssh2: RSA SHA256:cmmm7c4wjQpU6I6GIDB2gDRUMOHvT66UOlhswyLAq5I Jan 20 00:44:13.463144 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:44:13.461000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.479425 kernel: audit: type=1101 audit(1768869853.444:928): pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.479498 kernel: audit: type=1103 audit(1768869853.461:929): pid=6168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.489087 kernel: audit: type=1006 audit(1768869853.461:930): pid=6168 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 00:44:13.461000 audit[6168]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb369000 a2=3 a3=0 items=0 ppid=1 pid=6168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:44:13.505550 kernel: audit: type=1300 audit(1768869853.461:930): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb369000 a2=3 a3=0 items=0 ppid=1 pid=6168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 00:44:13.461000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:44:13.512171 kernel: audit: type=1327 audit(1768869853.461:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 00:44:13.517712 systemd-logind[2110]: New session 26 of user core. Jan 20 00:44:13.522959 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 00:44:13.525000 audit[6168]: USER_START pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.543000 audit[6171]: CRED_ACQ pid=6171 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.558706 kernel: audit: type=1105 audit(1768869853.525:931): pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.558763 kernel: audit: type=1103 audit(1768869853.543:932): pid=6171 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.742808 sshd[6171]: Connection closed by 10.200.16.10 port 47398 Jan 20 00:44:13.743189 sshd-session[6168]: pam_unix(sshd:session): session closed for user core Jan 20 00:44:13.742000 audit[6168]: USER_END pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.746392 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:44:13.747596 systemd-logind[2110]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:44:13.748576 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:47398.service: Deactivated successfully. Jan 20 00:44:13.752329 systemd-logind[2110]: Removed session 26. Jan 20 00:44:13.743000 audit[6168]: CRED_DISP pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.777251 kernel: audit: type=1106 audit(1768869853.742:933): pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.777319 kernel: audit: type=1104 audit(1768869853.743:934): pid=6168 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 00:44:13.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.14:22-10.200.16.10:47398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 00:44:14.545552 kubelet[3599]: E0120 00:44:14.545512 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t6nwm" podUID="e914416f-b403-4119-a223-0b5c6e18edd3"