Nov 1 00:22:04.683546 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 1 00:22:04.683565 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Oct 31 23:12:38 -00 2025 Nov 1 00:22:04.683573 kernel: efi: EFI v2.70 by EDK II Nov 1 00:22:04.683578 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Nov 1 00:22:04.683583 kernel: random: crng init done Nov 1 00:22:04.683589 kernel: ACPI: Early table checksum verification disabled Nov 1 00:22:04.683595 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Nov 1 00:22:04.683602 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 1 00:22:04.683607 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683613 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683618 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683623 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683629 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683634 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683642 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683647 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683653 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:04.683659 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 1 00:22:04.683665 kernel: NUMA: Failed to initialise from firmware Nov 1 00:22:04.683670 kernel: NUMA: Faking a 
node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 1 00:22:04.683676 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Nov 1 00:22:04.683681 kernel: Zone ranges: Nov 1 00:22:04.683687 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 1 00:22:04.683694 kernel: DMA32 empty Nov 1 00:22:04.683699 kernel: Normal empty Nov 1 00:22:04.683705 kernel: Movable zone start for each node Nov 1 00:22:04.683710 kernel: Early memory node ranges Nov 1 00:22:04.683716 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Nov 1 00:22:04.683721 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Nov 1 00:22:04.683727 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Nov 1 00:22:04.683733 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Nov 1 00:22:04.683738 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Nov 1 00:22:04.683744 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Nov 1 00:22:04.683749 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Nov 1 00:22:04.683755 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 1 00:22:04.683762 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 1 00:22:04.683768 kernel: psci: probing for conduit method from ACPI. Nov 1 00:22:04.683773 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 1 00:22:04.683779 kernel: psci: Using standard PSCI v0.2 function IDs Nov 1 00:22:04.683785 kernel: psci: Trusted OS migration not required Nov 1 00:22:04.683793 kernel: psci: SMC Calling Convention v1.1 Nov 1 00:22:04.683799 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 1 00:22:04.683806 kernel: ACPI: SRAT not present Nov 1 00:22:04.683812 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Nov 1 00:22:04.683819 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Nov 1 00:22:04.683825 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 1 00:22:04.683831 kernel: Detected PIPT I-cache on CPU0 Nov 1 00:22:04.683837 kernel: CPU features: detected: GIC system register CPU interface Nov 1 00:22:04.683843 kernel: CPU features: detected: Hardware dirty bit management Nov 1 00:22:04.683849 kernel: CPU features: detected: Spectre-v4 Nov 1 00:22:04.683855 kernel: CPU features: detected: Spectre-BHB Nov 1 00:22:04.683862 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 1 00:22:04.683868 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 1 00:22:04.683874 kernel: CPU features: detected: ARM erratum 1418040 Nov 1 00:22:04.683880 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 1 00:22:04.683886 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 1 00:22:04.683892 kernel: Policy zone: DMA Nov 1 00:22:04.683899 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29 Nov 1 00:22:04.683905 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Nov 1 00:22:04.683911 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:22:04.683917 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:22:04.683923 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:22:04.683931 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Nov 1 00:22:04.683937 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 1 00:22:04.683943 kernel: trace event string verifier disabled Nov 1 00:22:04.683949 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:22:04.683956 kernel: rcu: RCU event tracing is enabled. Nov 1 00:22:04.683962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 1 00:22:04.683969 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:22:04.683975 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:22:04.683981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 00:22:04.683987 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 1 00:22:04.683993 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 1 00:22:04.684000 kernel: GICv3: 256 SPIs implemented Nov 1 00:22:04.684007 kernel: GICv3: 0 Extended SPIs implemented Nov 1 00:22:04.684013 kernel: GICv3: Distributor has no Range Selector support Nov 1 00:22:04.684019 kernel: Root IRQ handler: gic_handle_irq Nov 1 00:22:04.684025 kernel: GICv3: 16 PPIs implemented Nov 1 00:22:04.684031 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 1 00:22:04.684037 kernel: ACPI: SRAT not present Nov 1 00:22:04.684043 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 1 00:22:04.684049 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Nov 1 00:22:04.684055 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Nov 1 00:22:04.684061 kernel: GICv3: using LPI property table @0x00000000400d0000 Nov 1 00:22:04.684067 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Nov 1 00:22:04.684075 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:04.684081 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 1 00:22:04.684087 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 1 00:22:04.684093 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 1 00:22:04.684099 kernel: arm-pv: using stolen time PV Nov 1 00:22:04.684106 kernel: Console: colour dummy device 80x25 Nov 1 00:22:04.684112 kernel: ACPI: Core revision 20210730 Nov 1 00:22:04.684119 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Nov 1 00:22:04.684125 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:22:04.684131 kernel: LSM: Security Framework initializing Nov 1 00:22:04.684138 kernel: SELinux: Initializing. Nov 1 00:22:04.684145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:22:04.684151 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:22:04.684157 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:22:04.684163 kernel: Platform MSI: ITS@0x8080000 domain created Nov 1 00:22:04.684170 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 1 00:22:04.684176 kernel: Remapping and enabling EFI services. Nov 1 00:22:04.684182 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:22:04.684188 kernel: Detected PIPT I-cache on CPU1 Nov 1 00:22:04.684195 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 1 00:22:04.684202 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Nov 1 00:22:04.684208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:04.684215 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 1 00:22:04.684221 kernel: Detected PIPT I-cache on CPU2 Nov 1 00:22:04.684227 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 1 00:22:04.684234 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Nov 1 00:22:04.684240 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:04.684257 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 1 00:22:04.684264 kernel: Detected PIPT I-cache on CPU3 Nov 1 00:22:04.684272 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 1 00:22:04.684278 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Nov 1 00:22:04.684285 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:04.684291 kernel: CPU3: 
Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 1 00:22:04.684301 kernel: smp: Brought up 1 node, 4 CPUs Nov 1 00:22:04.684309 kernel: SMP: Total of 4 processors activated. Nov 1 00:22:04.684315 kernel: CPU features: detected: 32-bit EL0 Support Nov 1 00:22:04.684322 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 1 00:22:04.684328 kernel: CPU features: detected: Common not Private translations Nov 1 00:22:04.684335 kernel: CPU features: detected: CRC32 instructions Nov 1 00:22:04.684341 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 1 00:22:04.684347 kernel: CPU features: detected: LSE atomic instructions Nov 1 00:22:04.684355 kernel: CPU features: detected: Privileged Access Never Nov 1 00:22:04.684362 kernel: CPU features: detected: RAS Extension Support Nov 1 00:22:04.684368 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 1 00:22:04.684375 kernel: CPU: All CPU(s) started at EL1 Nov 1 00:22:04.684381 kernel: alternatives: patching kernel code Nov 1 00:22:04.684388 kernel: devtmpfs: initialized Nov 1 00:22:04.684395 kernel: KASLR enabled Nov 1 00:22:04.684402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:22:04.684415 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 1 00:22:04.684423 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:22:04.684430 kernel: SMBIOS 3.0.0 present. 
Nov 1 00:22:04.684436 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Nov 1 00:22:04.684443 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:22:04.684449 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 1 00:22:04.684458 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 1 00:22:04.684465 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 1 00:22:04.684472 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:22:04.684478 kernel: audit: type=2000 audit(0.041:1): state=initialized audit_enabled=0 res=1 Nov 1 00:22:04.684485 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:22:04.684491 kernel: cpuidle: using governor menu Nov 1 00:22:04.684498 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 1 00:22:04.684504 kernel: ASID allocator initialised with 32768 entries Nov 1 00:22:04.684511 kernel: ACPI: bus type PCI registered Nov 1 00:22:04.684519 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:22:04.684551 kernel: Serial: AMBA PL011 UART driver Nov 1 00:22:04.684559 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:22:04.684566 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Nov 1 00:22:04.684573 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:22:04.684582 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Nov 1 00:22:04.684588 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:22:04.684595 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 1 00:22:04.684602 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:22:04.684610 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:22:04.684617 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:22:04.684624 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:22:04.684631 kernel: ACPI: Added 
_OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:22:04.684638 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:22:04.684645 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:22:04.684651 kernel: ACPI: Interpreter enabled Nov 1 00:22:04.684658 kernel: ACPI: Using GIC for interrupt routing Nov 1 00:22:04.684665 kernel: ACPI: MCFG table detected, 1 entries Nov 1 00:22:04.684673 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 1 00:22:04.684680 kernel: printk: console [ttyAMA0] enabled Nov 1 00:22:04.684686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:22:04.688561 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:22:04.688658 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 1 00:22:04.688736 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 1 00:22:04.688809 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 1 00:22:04.688894 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 1 00:22:04.688904 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 1 00:22:04.688914 kernel: PCI host bridge to bus 0000:00 Nov 1 00:22:04.688992 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 1 00:22:04.689091 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 1 00:22:04.689173 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 1 00:22:04.689227 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:22:04.689323 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 1 00:22:04.689393 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:22:04.689465 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 1 00:22:04.689526 kernel: pci 0000:00:01.0: reg 0x14: [mem 
0x10000000-0x10000fff] Nov 1 00:22:04.689584 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 1 00:22:04.689641 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 1 00:22:04.689700 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 1 00:22:04.689760 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 1 00:22:04.689813 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 1 00:22:04.689864 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 1 00:22:04.689917 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 1 00:22:04.689926 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 1 00:22:04.689932 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 1 00:22:04.689939 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 1 00:22:04.689946 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 1 00:22:04.689954 kernel: iommu: Default domain type: Translated Nov 1 00:22:04.689961 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 1 00:22:04.689967 kernel: vgaarb: loaded Nov 1 00:22:04.689974 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:22:04.689981 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:22:04.689987 kernel: PTP clock support registered Nov 1 00:22:04.689994 kernel: Registered efivars operations Nov 1 00:22:04.690000 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 1 00:22:04.690007 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:22:04.690015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:22:04.690022 kernel: pnp: PnP ACPI init Nov 1 00:22:04.690087 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 1 00:22:04.690097 kernel: pnp: PnP ACPI: found 1 devices Nov 1 00:22:04.690104 kernel: NET: Registered PF_INET protocol family Nov 1 00:22:04.690111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:22:04.690117 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:22:04.690124 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:22:04.690132 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:22:04.690139 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Nov 1 00:22:04.690146 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:22:04.690152 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:22:04.690159 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:22:04.690166 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:22:04.690172 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:22:04.690179 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 1 00:22:04.690185 kernel: kvm [1]: HYP mode not available Nov 1 00:22:04.690193 kernel: Initialise system trusted keyrings Nov 1 00:22:04.690200 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:22:04.690206 kernel: Key type asymmetric registered 
Nov 1 00:22:04.690213 kernel: Asymmetric key parser 'x509' registered Nov 1 00:22:04.690219 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:22:04.690226 kernel: io scheduler mq-deadline registered Nov 1 00:22:04.690232 kernel: io scheduler kyber registered Nov 1 00:22:04.690239 kernel: io scheduler bfq registered Nov 1 00:22:04.690246 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 1 00:22:04.690263 kernel: ACPI: button: Power Button [PWRB] Nov 1 00:22:04.690270 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 1 00:22:04.690334 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 1 00:22:04.690343 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:22:04.690350 kernel: thunder_xcv, ver 1.0 Nov 1 00:22:04.690356 kernel: thunder_bgx, ver 1.0 Nov 1 00:22:04.690363 kernel: nicpf, ver 1.0 Nov 1 00:22:04.690370 kernel: nicvf, ver 1.0 Nov 1 00:22:04.690455 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 1 00:22:04.690517 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-01T00:22:04 UTC (1761956524) Nov 1 00:22:04.690526 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:22:04.690533 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:22:04.690539 kernel: Segment Routing with IPv6 Nov 1 00:22:04.690546 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:22:04.690552 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:22:04.690559 kernel: Key type dns_resolver registered Nov 1 00:22:04.690565 kernel: registered taskstats version 1 Nov 1 00:22:04.690574 kernel: Loading compiled-in X.509 certificates Nov 1 00:22:04.690580 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 4aa5071b9a6f96878595e36d4bd5862a671c915d' Nov 1 00:22:04.690587 kernel: Key type .fscrypt registered Nov 1 00:22:04.690593 kernel: Key type fscrypt-provisioning registered Nov 1 00:22:04.690600 kernel: ima: No TPM chip found, 
activating TPM-bypass! Nov 1 00:22:04.690607 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:22:04.690613 kernel: ima: No architecture policies found Nov 1 00:22:04.690620 kernel: clk: Disabling unused clocks Nov 1 00:22:04.690626 kernel: Freeing unused kernel memory: 36416K Nov 1 00:22:04.690634 kernel: Run /init as init process Nov 1 00:22:04.690640 kernel: with arguments: Nov 1 00:22:04.690647 kernel: /init Nov 1 00:22:04.690653 kernel: with environment: Nov 1 00:22:04.690659 kernel: HOME=/ Nov 1 00:22:04.690665 kernel: TERM=linux Nov 1 00:22:04.690672 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:22:04.690680 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:22:04.690690 systemd[1]: Detected virtualization kvm. Nov 1 00:22:04.690698 systemd[1]: Detected architecture arm64. Nov 1 00:22:04.690704 systemd[1]: Running in initrd. Nov 1 00:22:04.690711 systemd[1]: No hostname configured, using default hostname. Nov 1 00:22:04.690718 systemd[1]: Hostname set to . Nov 1 00:22:04.690725 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:22:04.690732 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:22:04.690739 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:22:04.690747 systemd[1]: Reached target cryptsetup.target. Nov 1 00:22:04.690754 systemd[1]: Reached target paths.target. Nov 1 00:22:04.690761 systemd[1]: Reached target slices.target. Nov 1 00:22:04.690768 systemd[1]: Reached target swap.target. Nov 1 00:22:04.690775 systemd[1]: Reached target timers.target. Nov 1 00:22:04.690782 systemd[1]: Listening on iscsid.socket. Nov 1 00:22:04.690789 systemd[1]: Listening on iscsiuio.socket. 
Nov 1 00:22:04.690798 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:22:04.690805 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:22:04.690812 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:22:04.690819 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:22:04.690826 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:22:04.690833 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:22:04.690840 systemd[1]: Reached target sockets.target. Nov 1 00:22:04.690847 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:22:04.690854 systemd[1]: Finished network-cleanup.service. Nov 1 00:22:04.690862 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:22:04.690869 systemd[1]: Starting systemd-journald.service... Nov 1 00:22:04.690876 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:22:04.690883 systemd[1]: Starting systemd-resolved.service... Nov 1 00:22:04.690890 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:22:04.690897 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:22:04.690904 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:22:04.690912 kernel: audit: type=1130 audit(1761956524.682:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.690919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:22:04.690927 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:22:04.690934 kernel: audit: type=1130 audit(1761956524.690:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:04.690944 systemd-journald[290]: Journal started Nov 1 00:22:04.690985 systemd-journald[290]: Runtime Journal (/run/log/journal/1b78c5679a2c4ff09e68b8a877db5315) is 6.0M, max 48.7M, 42.6M free. Nov 1 00:22:04.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.685109 systemd-modules-load[291]: Inserted module 'overlay' Nov 1 00:22:04.694402 systemd[1]: Started systemd-journald.service. Nov 1 00:22:04.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.697189 kernel: audit: type=1130 audit(1761956524.694:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.698564 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:22:04.699509 systemd-resolved[292]: Positive Trust Anchors: Nov 1 00:22:04.699516 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:22:04.699543 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:22:04.708329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:22:04.703843 systemd-resolved[292]: Defaulting to hostname 'linux'. Nov 1 00:22:04.704641 systemd[1]: Started systemd-resolved.service. Nov 1 00:22:04.714797 kernel: audit: type=1130 audit(1761956524.710:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.714823 kernel: Bridge firewalling registered Nov 1 00:22:04.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.710447 systemd[1]: Reached target nss-lookup.target. Nov 1 00:22:04.718125 kernel: audit: type=1130 audit(1761956524.714:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:04.713470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:22:04.714722 systemd-modules-load[291]: Inserted module 'br_netfilter' Nov 1 00:22:04.723936 kernel: audit: type=1130 audit(1761956524.720:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:04.720097 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:22:04.721669 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:22:04.728265 kernel: SCSI subsystem initialized Nov 1 00:22:04.730499 dracut-cmdline[308]: dracut-dracut-053 Nov 1 00:22:04.732557 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29 Nov 1 00:22:04.739729 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:22:04.739753 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:22:04.739769 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:22:04.741968 systemd-modules-load[291]: Inserted module 'dm_multipath' Nov 1 00:22:04.742916 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:22:04.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Nov 1 00:22:04.744440 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:22:04.747765 kernel: audit: type=1130 audit(1761956524.743:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:04.753416 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:22:04.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:04.757272 kernel: audit: type=1130 audit(1761956524.753:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:04.792276 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:22:04.804274 kernel: iscsi: registered transport (tcp)
Nov 1 00:22:04.819298 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:22:04.819345 kernel: QLogic iSCSI HBA Driver
Nov 1 00:22:04.853751 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:22:04.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:04.855314 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:22:04.858379 kernel: audit: type=1130 audit(1761956524.854:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:04.897280 kernel: raid6: neonx8 gen() 13724 MB/s
Nov 1 00:22:04.914264 kernel: raid6: neonx8 xor() 10686 MB/s
Nov 1 00:22:04.931260 kernel: raid6: neonx4 gen() 13538 MB/s
Nov 1 00:22:04.948262 kernel: raid6: neonx4 xor() 11058 MB/s
Nov 1 00:22:04.965262 kernel: raid6: neonx2 gen() 12900 MB/s
Nov 1 00:22:04.982276 kernel: raid6: neonx2 xor() 10221 MB/s
Nov 1 00:22:04.999263 kernel: raid6: neonx1 gen() 10498 MB/s
Nov 1 00:22:05.016261 kernel: raid6: neonx1 xor() 8685 MB/s
Nov 1 00:22:05.033264 kernel: raid6: int64x8 gen() 6269 MB/s
Nov 1 00:22:05.050276 kernel: raid6: int64x8 xor() 3537 MB/s
Nov 1 00:22:05.067281 kernel: raid6: int64x4 gen() 7193 MB/s
Nov 1 00:22:05.084265 kernel: raid6: int64x4 xor() 3856 MB/s
Nov 1 00:22:05.101271 kernel: raid6: int64x2 gen() 6150 MB/s
Nov 1 00:22:05.118272 kernel: raid6: int64x2 xor() 3314 MB/s
Nov 1 00:22:05.135263 kernel: raid6: int64x1 gen() 5047 MB/s
Nov 1 00:22:05.152750 kernel: raid6: int64x1 xor() 2646 MB/s
Nov 1 00:22:05.152763 kernel: raid6: using algorithm neonx8 gen() 13724 MB/s
Nov 1 00:22:05.152772 kernel: raid6: .... xor() 10686 MB/s, rmw enabled
Nov 1 00:22:05.152780 kernel: raid6: using neon recovery algorithm
Nov 1 00:22:05.163268 kernel: xor: measuring software checksum speed
Nov 1 00:22:05.163289 kernel: 8regs : 15656 MB/sec
Nov 1 00:22:05.164322 kernel: 32regs : 20712 MB/sec
Nov 1 00:22:05.165402 kernel: arm64_neon : 27123 MB/sec
Nov 1 00:22:05.165418 kernel: xor: using function: arm64_neon (27123 MB/sec)
Nov 1 00:22:05.218273 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Nov 1 00:22:05.228206 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:22:05.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:05.229000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:22:05.229000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:22:05.229991 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:22:05.242401 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Nov 1 00:22:05.245697 systemd[1]: Started systemd-udevd.service.
Nov 1 00:22:05.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:05.247446 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:22:05.259639 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Nov 1 00:22:05.287426 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:22:05.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:05.289092 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:22:05.329342 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:22:05.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:05.360491 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:22:05.365038 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:22:05.365053 kernel: GPT:9289727 != 19775487
Nov 1 00:22:05.365070 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:22:05.365079 kernel: GPT:9289727 != 19775487
Nov 1 00:22:05.365087 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:22:05.365094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:22:05.378680 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:22:05.388712 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (556)
Nov 1 00:22:05.391755 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:22:05.392683 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:22:05.396687 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:22:05.399130 systemd[1]: Starting disk-uuid.service...
Nov 1 00:22:05.404140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:22:05.406003 disk-uuid[564]: Primary Header is updated.
Nov 1 00:22:05.406003 disk-uuid[564]: Secondary Entries is updated.
Nov 1 00:22:05.406003 disk-uuid[564]: Secondary Header is updated.
Nov 1 00:22:05.410266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:22:05.413260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:22:05.416281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:22:06.417219 disk-uuid[565]: The operation has completed successfully.
Nov 1 00:22:06.418289 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:22:06.441675 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:22:06.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.441771 systemd[1]: Finished disk-uuid.service.
Nov 1 00:22:06.445849 systemd[1]: Starting verity-setup.service...
Nov 1 00:22:06.459287 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 1 00:22:06.479549 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:22:06.481633 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:22:06.483387 systemd[1]: Finished verity-setup.service.
Nov 1 00:22:06.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.527270 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:22:06.527381 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:22:06.528138 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 00:22:06.528864 systemd[1]: Starting ignition-setup.service...
Nov 1 00:22:06.530903 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:22:06.537281 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 1 00:22:06.537315 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:22:06.537325 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:22:06.545927 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:22:06.553176 systemd[1]: Finished ignition-setup.service.
Nov 1 00:22:06.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.554753 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:22:06.600453 ignition[652]: Ignition 2.14.0
Nov 1 00:22:06.600463 ignition[652]: Stage: fetch-offline
Nov 1 00:22:06.600609 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:06.600620 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:22:06.600750 ignition[652]: parsed url from cmdline: ""
Nov 1 00:22:06.600753 ignition[652]: no config URL provided
Nov 1 00:22:06.600758 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:22:06.600765 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:22:06.600782 ignition[652]: op(1): [started] loading QEMU firmware config module
Nov 1 00:22:06.600789 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 00:22:06.608232 ignition[652]: op(1): [finished] loading QEMU firmware config module
Nov 1 00:22:06.623104 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:22:06.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.624000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:22:06.625602 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:22:06.644341 systemd-networkd[743]: lo: Link UP
Nov 1 00:22:06.644355 systemd-networkd[743]: lo: Gained carrier
Nov 1 00:22:06.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.644762 systemd-networkd[743]: Enumeration completed
Nov 1 00:22:06.644947 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:22:06.645057 systemd[1]: Started systemd-networkd.service.
Nov 1 00:22:06.646575 systemd-networkd[743]: eth0: Link UP
Nov 1 00:22:06.646578 systemd-networkd[743]: eth0: Gained carrier
Nov 1 00:22:06.646963 systemd[1]: Reached target network.target.
Nov 1 00:22:06.649322 systemd[1]: Starting iscsiuio.service...
Nov 1 00:22:06.656387 systemd[1]: Started iscsiuio.service.
Nov 1 00:22:06.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.658564 systemd[1]: Starting iscsid.service...
Nov 1 00:22:06.661988 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:22:06.661988 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Nov 1 00:22:06.661988 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:22:06.661988 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:22:06.661988 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:22:06.661988 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:22:06.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.664773 systemd[1]: Started iscsid.service.
Nov 1 00:22:06.670749 ignition[652]: parsing config with SHA512: 9af91c8c2fe8e1a95b54a3f90c047e1ea3e4e803597d598327ec94a6b798fb94949a22f339c10c8cbbadde5225ed35f6f5dc641a9223e89a310e1e6d9b1c01d9
Nov 1 00:22:06.664804 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:22:06.671650 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:22:06.682740 unknown[652]: fetched base config from "system"
Nov 1 00:22:06.682753 unknown[652]: fetched user config from "qemu"
Nov 1 00:22:06.683307 ignition[652]: fetch-offline: fetch-offline passed
Nov 1 00:22:06.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.683621 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:22:06.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.683366 ignition[652]: Ignition finished successfully
Nov 1 00:22:06.685064 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:22:06.686275 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:22:06.687471 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:22:06.688703 systemd[1]: Reached target remote-fs.target.
Nov 1 00:22:06.690791 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:22:06.691907 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:22:06.692722 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:22:06.698695 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:22:06.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.702786 ignition[758]: Ignition 2.14.0
Nov 1 00:22:06.702794 ignition[758]: Stage: kargs
Nov 1 00:22:06.702889 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:06.704849 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:22:06.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.702898 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:22:06.703771 ignition[758]: kargs: kargs passed
Nov 1 00:22:06.706927 systemd[1]: Starting ignition-disks.service...
Nov 1 00:22:06.703813 ignition[758]: Ignition finished successfully
Nov 1 00:22:06.713508 ignition[768]: Ignition 2.14.0
Nov 1 00:22:06.713522 ignition[768]: Stage: disks
Nov 1 00:22:06.713612 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:06.713622 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:22:06.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.715218 systemd[1]: Finished ignition-disks.service.
Nov 1 00:22:06.714454 ignition[768]: disks: disks passed
Nov 1 00:22:06.716167 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:22:06.714496 ignition[768]: Ignition finished successfully
Nov 1 00:22:06.717638 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:22:06.718873 systemd[1]: Reached target local-fs.target.
Nov 1 00:22:06.719996 systemd[1]: Reached target sysinit.target.
Nov 1 00:22:06.721245 systemd[1]: Reached target basic.target.
Nov 1 00:22:06.723289 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:22:06.734030 systemd-fsck[776]: ROOT: clean, 637/553520 files, 56031/553472 blocks
Nov 1 00:22:06.737902 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:22:06.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.739541 systemd[1]: Mounting sysroot.mount...
Nov 1 00:22:06.744279 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:22:06.744894 systemd[1]: Mounted sysroot.mount.
Nov 1 00:22:06.745615 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:22:06.748127 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:22:06.748993 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Nov 1 00:22:06.749031 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:22:06.749055 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:22:06.750899 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:22:06.752585 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:22:06.756740 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:22:06.760388 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:22:06.764154 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:22:06.767116 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:22:06.792481 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:22:06.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.793974 systemd[1]: Starting ignition-mount.service...
Nov 1 00:22:06.795244 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:22:06.799708 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
Nov 1 00:22:06.807995 ignition[829]: INFO : Ignition 2.14.0
Nov 1 00:22:06.807995 ignition[829]: INFO : Stage: mount
Nov 1 00:22:06.810041 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:06.810041 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:22:06.810041 ignition[829]: INFO : mount: mount passed
Nov 1 00:22:06.810041 ignition[829]: INFO : Ignition finished successfully
Nov 1 00:22:06.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.812312 systemd[1]: Finished ignition-mount.service.
Nov 1 00:22:06.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:06.813704 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:22:07.490372 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:22:07.496260 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (837)
Nov 1 00:22:07.497768 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 1 00:22:07.497784 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:22:07.497794 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:22:07.501014 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:22:07.502498 systemd[1]: Starting ignition-files.service...
Nov 1 00:22:07.515540 ignition[857]: INFO : Ignition 2.14.0
Nov 1 00:22:07.515540 ignition[857]: INFO : Stage: files
Nov 1 00:22:07.517093 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:07.517093 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:22:07.517093 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:22:07.520189 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:22:07.520189 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:22:07.523515 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:22:07.524756 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:22:07.526210 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:22:07.526210 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:22:07.526210 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:22:07.526210 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 1 00:22:07.526210 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 1 00:22:07.524901 unknown[857]: wrote ssh authorized keys file for user: core
Nov 1 00:22:07.909908 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:22:08.097195 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:22:08.098958 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:22:08.117315 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:22:08.117315 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:22:08.117315 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 1 00:22:08.420469 systemd-networkd[743]: eth0: Gained IPv6LL
Nov 1 00:22:08.501684 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 1 00:22:08.747573 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:22:08.747573 ignition[857]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:22:08.750754 ignition[857]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:22:08.791443 ignition[857]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:22:08.793763 ignition[857]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:22:08.793763 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:22:08.793763 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:22:08.793763 ignition[857]: INFO : files: files passed
Nov 1 00:22:08.793763 ignition[857]: INFO : Ignition finished successfully
Nov 1 00:22:08.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.793766 systemd[1]: Finished ignition-files.service.
Nov 1 00:22:08.796802 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:22:08.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.803967 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Nov 1 00:22:08.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.798046 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:22:08.808291 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:22:08.798724 systemd[1]: Starting ignition-quench.service...
Nov 1 00:22:08.802290 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:22:08.802374 systemd[1]: Finished ignition-quench.service.
Nov 1 00:22:08.803981 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:22:08.805003 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:22:08.807396 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:22:08.818966 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:22:08.819062 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:22:08.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.820619 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:22:08.821719 systemd[1]: Reached target initrd.target.
Nov 1 00:22:08.823018 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:22:08.823691 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:22:08.833341 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:22:08.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.834738 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:22:08.841931 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:22:08.842777 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:22:08.844069 systemd[1]: Stopped target timers.target.
Nov 1 00:22:08.845272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:22:08.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.845372 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:22:08.846537 systemd[1]: Stopped target initrd.target.
Nov 1 00:22:08.847745 systemd[1]: Stopped target basic.target.
Nov 1 00:22:08.848908 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:22:08.850070 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:22:08.851293 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:22:08.852654 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:22:08.853886 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:22:08.855164 systemd[1]: Stopped target sysinit.target.
Nov 1 00:22:08.856319 systemd[1]: Stopped target local-fs.target.
Nov 1 00:22:08.857542 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:22:08.858719 systemd[1]: Stopped target swap.target.
Nov 1 00:22:08.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.859856 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:22:08.859955 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:22:08.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.861131 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:22:08.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.862147 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:22:08.862240 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:22:08.863597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:22:08.863689 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:22:08.864885 systemd[1]: Stopped target paths.target.
Nov 1 00:22:08.865903 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:22:08.869290 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:22:08.870463 systemd[1]: Stopped target slices.target.
Nov 1 00:22:08.871843 systemd[1]: Stopped target sockets.target.
Nov 1 00:22:08.873032 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:22:08.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.873103 systemd[1]: Closed iscsid.socket.
Nov 1 00:22:08.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.874093 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:22:08.874190 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:22:08.875394 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:22:08.875487 systemd[1]: Stopped ignition-files.service.
Nov 1 00:22:08.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.877396 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:22:08.884409 ignition[898]: INFO : Ignition 2.14.0
Nov 1 00:22:08.884409 ignition[898]: INFO : Stage: umount
Nov 1 00:22:08.884409 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:08.884409 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:22:08.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.878835 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:22:08.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.891324 ignition[898]: INFO : umount: umount passed
Nov 1 00:22:08.891324 ignition[898]: INFO : Ignition finished successfully
Nov 1 00:22:08.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.880497 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:22:08.880623 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:22:08.882685 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:22:08.885540 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:22:08.885663 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:22:08.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.886875 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:22:08.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.886967 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:22:08.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.889478 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:22:08.889583 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:22:08.891006 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:22:08.891088 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:22:08.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.893093 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:22:08.894503 systemd[1]: Stopped target network.target.
Nov 1 00:22:08.895345 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:22:08.895379 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:22:08.896508 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:22:08.896548 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:22:08.898923 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:22:08.898965 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:22:08.900076 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:22:08.900111 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:22:08.902520 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:22:08.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.903753 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:22:08.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.905630 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:22:08.919000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:22:08.905716 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:22:08.915293 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:22:08.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.915302 systemd-networkd[743]: eth0: DHCPv6 lease lost
Nov 1 00:22:08.922000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:22:08.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.915393 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:22:08.916900 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:22:08.916991 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:22:08.918184 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:22:08.918212 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:22:08.919941 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:22:08.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.920617 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:22:08.920672 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:22:08.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.922153 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:22:08.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.922195 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:22:08.923812 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:22:08.923851 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:22:08.925007 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:22:08.929473 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:22:08.931943 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:22:08.932040 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:22:08.933910 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:22:08.933984 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:22:08.935562 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:22:08.935600 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:22:08.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.944449 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:22:08.944558 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:22:08.945998 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:22:08.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.946031 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:22:08.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.947043 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:22:08.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.947072 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:22:08.948358 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:22:08.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.948398 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:22:08.949527 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:22:08.949561 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:22:08.950929 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:22:08.950961 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:22:08.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:08.952760 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:22:08.953552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:22:08.953599 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:22:08.957591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:22:08.957668 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:22:08.959239 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:22:08.961140 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:22:08.967720 systemd[1]: Switching root.
Nov 1 00:22:08.968000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:22:08.968000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:22:08.970000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:22:08.970000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:22:08.970000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:22:08.988660 iscsid[748]: iscsid shutting down.
Nov 1 00:22:08.989265 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:22:08.989324 systemd-journald[290]: Journal stopped
Nov 1 00:22:10.966686 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:22:10.966740 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:22:10.966752 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:22:10.966762 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:22:10.966775 kernel: SELinux: policy capability open_perms=1
Nov 1 00:22:10.966785 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:22:10.966796 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:22:10.966805 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:22:10.966819 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:22:10.966828 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:22:10.966840 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:22:10.966854 kernel: kauditd_printk_skb: 70 callbacks suppressed
Nov 1 00:22:10.966863 kernel: audit: type=1403 audit(1761956529.069:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:22:10.966875 systemd[1]: Successfully loaded SELinux policy in 34.626ms.
Nov 1 00:22:10.966888 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.912ms.
Nov 1 00:22:10.966901 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:22:10.966912 systemd[1]: Detected virtualization kvm.
Nov 1 00:22:10.966922 systemd[1]: Detected architecture arm64.
Nov 1 00:22:10.966933 systemd[1]: Detected first boot.
Nov 1 00:22:10.966944 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:22:10.966955 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:22:10.966966 kernel: audit: type=1400 audit(1761956529.211:82): avc: denied { associate } for pid=949 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:22:10.966978 kernel: audit: type=1300 audit(1761956529.211:82): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c766c a1=40000caae0 a2=40000d0a00 a3=32 items=0 ppid=932 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:22:10.966990 kernel: audit: type=1327 audit(1761956529.211:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:22:10.967000 kernel: audit: type=1400 audit(1761956529.212:83): avc: denied { associate } for pid=949 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Nov 1 00:22:10.967011 kernel: audit: type=1300 audit(1761956529.212:83): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c7749 a2=1ed a3=0 items=2 ppid=932 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:22:10.967020 kernel: audit: type=1307 audit(1761956529.212:83): cwd="/"
Nov 1 00:22:10.967030 kernel: audit: type=1302 audit(1761956529.212:83): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:22:10.967042 kernel: audit: type=1302 audit(1761956529.212:83): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:22:10.967054 kernel: audit: type=1327 audit(1761956529.212:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:22:10.967066 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:22:10.967077 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:22:10.967089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:22:10.967101 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:22:10.967113 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:22:10.967123 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Nov 1 00:22:10.967134 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:22:10.967144 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:22:10.967155 systemd[1]: Created slice system-getty.slice.
Nov 1 00:22:10.967166 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:22:10.967177 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:22:10.967189 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:22:10.967200 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:22:10.967210 systemd[1]: Created slice user.slice.
Nov 1 00:22:10.967221 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:22:10.967232 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:22:10.967242 systemd[1]: Set up automount boot.automount.
Nov 1 00:22:10.967265 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:22:10.967276 systemd[1]: Reached target integritysetup.target.
Nov 1 00:22:10.967286 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:22:10.967298 systemd[1]: Reached target remote-fs.target.
Nov 1 00:22:10.967308 systemd[1]: Reached target slices.target.
Nov 1 00:22:10.967319 systemd[1]: Reached target swap.target.
Nov 1 00:22:10.967329 systemd[1]: Reached target torcx.target.
Nov 1 00:22:10.967339 systemd[1]: Reached target veritysetup.target.
Nov 1 00:22:10.967350 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:22:10.967361 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:22:10.967381 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:22:10.967393 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:22:10.967406 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:22:10.967417 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:22:10.967427 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:22:10.967439 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:22:10.967449 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:22:10.967459 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:22:10.967470 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:22:10.967480 systemd[1]: Mounting media.mount...
Nov 1 00:22:10.967490 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:22:10.967501 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:22:10.967512 systemd[1]: Mounting tmp.mount...
Nov 1 00:22:10.967522 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:22:10.967533 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:22:10.967543 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:22:10.967553 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:22:10.967563 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:22:10.967573 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:22:10.967583 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:22:10.967594 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:22:10.967605 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:22:10.967620 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:22:10.967631 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:22:10.967641 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:22:10.967651 systemd[1]: Starting systemd-journald.service...
Nov 1 00:22:10.967662 kernel: fuse: init (API version 7.34)
Nov 1 00:22:10.967672 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:22:10.967683 kernel: loop: module loaded
Nov 1 00:22:10.967692 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:22:10.967704 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:22:10.967714 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:22:10.967724 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:22:10.967734 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:22:10.967745 systemd[1]: Mounted media.mount.
Nov 1 00:22:10.967756 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:22:10.967766 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:22:10.967776 systemd[1]: Mounted tmp.mount.
Nov 1 00:22:10.967786 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:22:10.967797 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:22:10.967810 systemd-journald[1037]: Journal started
Nov 1 00:22:10.967850 systemd-journald[1037]: Runtime Journal (/run/log/journal/1b78c5679a2c4ff09e68b8a877db5315) is 6.0M, max 48.7M, 42.6M free.
Nov 1 00:22:10.894000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:22:10.894000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:22:10.965000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:22:10.965000 audit[1037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffeb61c070 a2=4000 a3=1 items=0 ppid=1 pid=1037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:22:10.965000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:22:10.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.969674 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:22:10.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.971288 systemd[1]: Started systemd-journald.service.
Nov 1 00:22:10.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.972057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:22:10.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.972467 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:22:10.973470 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:22:10.973650 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:22:10.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.974551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:22:10.975450 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:22:10.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.976414 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:22:10.976600 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:22:10.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.977619 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:22:10.977997 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:22:10.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.979180 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:22:10.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.981367 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:22:10.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.982555 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:22:10.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:10.983704 systemd[1]: Reached target network-pre.target.
Nov 1 00:22:10.985576 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:22:10.987206 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:22:10.990101 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:22:10.991508 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:22:10.993275 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:22:10.994080 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:22:10.995193 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:22:10.996080 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:22:10.997129 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:22:10.999282 systemd-journald[1037]: Time spent on flushing to /var/log/journal/1b78c5679a2c4ff09e68b8a877db5315 is 12.034ms for 928 entries.
Nov 1 00:22:10.999282 systemd-journald[1037]: System Journal (/var/log/journal/1b78c5679a2c4ff09e68b8a877db5315) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:22:11.016362 systemd-journald[1037]: Received client request to flush runtime journal.
Nov 1 00:22:11.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.000741 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:22:11.001904 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:22:11.004751 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:22:11.005938 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:22:11.006816 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:22:11.008764 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:22:11.015447 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:22:11.016537 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:22:11.017646 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:22:11.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.019477 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:22:11.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.028731 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:22:11.030426 udevadm[1084]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:22:11.030532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:22:11.046967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:22:11.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.366199 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:22:11.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.368310 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:22:11.383735 systemd-udevd[1090]: Using default interface naming scheme 'v252'.
Nov 1 00:22:11.395908 systemd[1]: Started systemd-udevd.service.
Nov 1 00:22:11.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.398512 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:22:11.408607 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:22:11.419126 systemd[1]: Found device dev-ttyAMA0.device.
Nov 1 00:22:11.439539 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:22:11.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.486233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:22:11.493674 systemd[1]: Finished systemd-udev-settle.service.
Nov 1 00:22:11.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.495909 systemd[1]: Starting lvm2-activation-early.service...
Nov 1 00:22:11.497047 systemd-networkd[1098]: lo: Link UP
Nov 1 00:22:11.497059 systemd-networkd[1098]: lo: Gained carrier
Nov 1 00:22:11.497907 systemd-networkd[1098]: Enumeration completed
Nov 1 00:22:11.498057 systemd[1]: Started systemd-networkd.service.
Nov 1 00:22:11.498568 systemd-networkd[1098]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:22:11.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.501865 systemd-networkd[1098]: eth0: Link UP
Nov 1 00:22:11.501874 systemd-networkd[1098]: eth0: Gained carrier
Nov 1 00:22:11.507878 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:22:11.529391 systemd-networkd[1098]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:22:11.534211 systemd[1]: Finished lvm2-activation-early.service.
Nov 1 00:22:11.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.535097 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:22:11.537014 systemd[1]: Starting lvm2-activation.service...
Nov 1 00:22:11.540583 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:22:11.583088 systemd[1]: Finished lvm2-activation.service.
Nov 1 00:22:11.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.583986 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:22:11.584808 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:22:11.584841 systemd[1]: Reached target local-fs.target.
Nov 1 00:22:11.585556 systemd[1]: Reached target machines.target.
Nov 1 00:22:11.587448 systemd[1]: Starting ldconfig.service...
Nov 1 00:22:11.588403 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:22:11.588455 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:11.589484 systemd[1]: Starting systemd-boot-update.service...
Nov 1 00:22:11.591435 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Nov 1 00:22:11.593615 systemd[1]: Starting systemd-machine-id-commit.service...
Nov 1 00:22:11.595525 systemd[1]: Starting systemd-sysext.service...
Nov 1 00:22:11.596596 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl)
Nov 1 00:22:11.597623 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Nov 1 00:22:11.603273 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Nov 1 00:22:11.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.606623 systemd[1]: Unmounting usr-share-oem.mount...
Nov 1 00:22:11.612388 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Nov 1 00:22:11.612643 systemd[1]: Unmounted usr-share-oem.mount.
Nov 1 00:22:11.666274 kernel: loop0: detected capacity change from 0 to 207008
Nov 1 00:22:11.670648 systemd[1]: Finished systemd-machine-id-commit.service.
Nov 1 00:22:11.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.675534 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31)
Nov 1 00:22:11.675534 systemd-fsck[1139]: /dev/vda1: 236 files, 117310/258078 clusters
Nov 1 00:22:11.679065 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Nov 1 00:22:11.679279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:22:11.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.707292 kernel: loop1: detected capacity change from 0 to 207008
Nov 1 00:22:11.714119 (sd-sysext)[1147]: Using extensions 'kubernetes'.
Nov 1 00:22:11.714491 (sd-sysext)[1147]: Merged extensions into '/usr'.
Nov 1 00:22:11.731492 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:22:11.733029 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:22:11.735434 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:22:11.737698 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:22:11.738574 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:22:11.738731 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:11.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.739570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:22:11.739744 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:22:11.740964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:22:11.741122 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:22:11.742415 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:22:11.742714 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:22:11.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.744132 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:22:11.744241 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:22:11.786899 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:22:11.790268 systemd[1]: Finished ldconfig.service.
Nov 1 00:22:11.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.957534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:22:11.959346 systemd[1]: Mounting boot.mount...
Nov 1 00:22:11.961141 systemd[1]: Mounting usr-share-oem.mount...
Nov 1 00:22:11.967392 systemd[1]: Mounted boot.mount.
Nov 1 00:22:11.968213 systemd[1]: Mounted usr-share-oem.mount.
Nov 1 00:22:11.970209 systemd[1]: Finished systemd-sysext.service.
Nov 1 00:22:11.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.972951 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:22:11.974663 systemd[1]: Starting systemd-tmpfiles-setup.service...
Nov 1 00:22:11.977631 systemd[1]: Finished systemd-boot-update.service.
Nov 1 00:22:11.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:11.979908 systemd[1]: Reloading.
Nov 1 00:22:11.982905 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Nov 1 00:22:11.984049 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:22:11.985314 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:22:12.011177 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-11-01T00:22:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:22:12.011551 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-11-01T00:22:12Z" level=info msg="torcx already run"
Nov 1 00:22:12.072436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:22:12.072456 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:22:12.087660 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:22:12.132551 systemd[1]: Finished systemd-tmpfiles-setup.service.
Nov 1 00:22:12.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.136070 systemd[1]: Starting audit-rules.service...
Nov 1 00:22:12.137792 systemd[1]: Starting clean-ca-certificates.service...
Nov 1 00:22:12.139739 systemd[1]: Starting systemd-journal-catalog-update.service...
Nov 1 00:22:12.141983 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:22:12.143986 systemd[1]: Starting systemd-timesyncd.service...
Nov 1 00:22:12.145968 systemd[1]: Starting systemd-update-utmp.service...
Nov 1 00:22:12.147306 systemd[1]: Finished clean-ca-certificates.service.
Nov 1 00:22:12.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.151000 audit[1244]: SYSTEM_BOOT pid=1244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.150127 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:22:12.154703 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.155956 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:22:12.157739 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:22:12.159555 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:22:12.160235 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.160419 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:12.160571 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:22:12.161632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:22:12.161771 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:22:12.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.166358 systemd[1]: Finished systemd-journal-catalog-update.service.
Nov 1 00:22:12.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.167710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:22:12.167840 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:22:12.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.169173 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:22:12.169329 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:22:12.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.170495 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:22:12.170612 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.171769 systemd[1]: Starting systemd-update-done.service...
Nov 1 00:22:12.172995 systemd[1]: Finished systemd-update-utmp.service.
Nov 1 00:22:12.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.176169 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.177431 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:22:12.181724 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:22:12.183493 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:22:12.184134 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.184287 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:12.184389 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:22:12.185191 systemd[1]: Finished systemd-update-done.service.
Nov 1 00:22:12.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.186382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:22:12.186514 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:22:12.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.187743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:22:12.191296 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:22:12.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.192495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:22:12.192638 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:22:12.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.193662 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:22:12.193749 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.195985 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.197061 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:22:12.199711 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:22:12.201434 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:22:12.203126 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:22:12.203985 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.204119 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:12.205408 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:22:12.206222 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:22:12.207391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:22:12.207659 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:22:12.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.208825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:22:12.208952 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:22:12.209060 systemd-resolved[1239]: Positive Trust Anchors:
Nov 1 00:22:12.209070 systemd-resolved[1239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:22:12.209097 systemd-resolved[1239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:22:12.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.210304 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:22:12.210446 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:22:12.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:22:12.211720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:22:12.211806 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:22:12.212000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Nov 1 00:22:12.212000 audit[1271]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe76bce70 a2=420 a3=0 items=0 ppid=1232 pid=1271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:22:12.212000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Nov 1 00:22:12.212659 augenrules[1271]: No rules
Nov 1 00:22:12.212713 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:22:12.212864 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:22:12.214161 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:22:12.215226 systemd[1]: Finished audit-rules.service.
Nov 1 00:22:12.223387 systemd-resolved[1239]: Defaulting to hostname 'linux'.
Nov 1 00:22:12.224783 systemd[1]: Started systemd-resolved.service.
Nov 1 00:22:11.790811 systemd-resolved[1239]: Clock change detected. Flushing caches.
Nov 1 00:22:11.807275 systemd-journald[1037]: Time jumped backwards, rotating.
Nov 1 00:22:11.790933 systemd[1]: Started systemd-timesyncd.service.
Nov 1 00:22:11.791502 systemd-timesyncd[1241]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 1 00:22:11.791569 systemd-timesyncd[1241]: Initial clock synchronization to Sat 2025-11-01 00:22:11.790773 UTC.
Nov 1 00:22:11.792624 systemd[1]: Reached target network.target.
Nov 1 00:22:11.793313 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:22:11.794230 systemd[1]: Reached target sysinit.target.
Nov 1 00:22:11.795258 systemd[1]: Started motdgen.path.
Nov 1 00:22:11.795985 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Nov 1 00:22:11.796946 systemd[1]: Started systemd-tmpfiles-clean.timer.
Nov 1 00:22:11.797674 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:22:11.797695 systemd[1]: Reached target paths.target.
Nov 1 00:22:11.798300 systemd[1]: Reached target time-set.target.
Nov 1 00:22:11.800274 systemd[1]: Started logrotate.timer.
Nov 1 00:22:11.801155 systemd[1]: Started mdadm.timer.
Nov 1 00:22:11.801769 systemd[1]: Reached target timers.target.
Nov 1 00:22:11.802756 systemd[1]: Listening on dbus.socket.
Nov 1 00:22:11.804561 systemd[1]: Starting docker.socket...
Nov 1 00:22:11.806106 systemd[1]: Listening on sshd.socket.
Nov 1 00:22:11.807301 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:11.807600 systemd[1]: Listening on docker.socket.
Nov 1 00:22:11.808254 systemd[1]: Reached target sockets.target.
Nov 1 00:22:11.808993 systemd[1]: Reached target basic.target.
Nov 1 00:22:11.809801 systemd[1]: System is tainted: cgroupsv1
Nov 1 00:22:11.809849 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:22:11.809868 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:22:11.810816 systemd[1]: Starting containerd.service...
Nov 1 00:22:11.812517 systemd[1]: Starting dbus.service...
Nov 1 00:22:11.814239 systemd[1]: Starting enable-oem-cloudinit.service...
Nov 1 00:22:11.816501 systemd[1]: Starting extend-filesystems.service...
Nov 1 00:22:11.817375 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Nov 1 00:22:11.818489 systemd[1]: Starting motdgen.service...
Nov 1 00:22:11.819610 jq[1296]: false
Nov 1 00:22:11.820971 systemd[1]: Starting prepare-helm.service...
Nov 1 00:22:11.822795 systemd[1]: Starting ssh-key-proc-cmdline.service...
Nov 1 00:22:11.824889 systemd[1]: Starting sshd-keygen.service...
Nov 1 00:22:11.827493 systemd[1]: Starting systemd-logind.service...
Nov 1 00:22:11.828547 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:22:11.828619 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:22:11.829684 systemd[1]: Starting update-engine.service...
Nov 1 00:22:11.831788 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Nov 1 00:22:11.838945 jq[1315]: true
Nov 1 00:22:11.834289 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:22:11.836876 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Nov 1 00:22:11.837755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:22:11.837965 systemd[1]: Finished ssh-key-proc-cmdline.service.
Nov 1 00:22:11.850055 tar[1320]: linux-arm64/LICENSE
Nov 1 00:22:11.850055 tar[1320]: linux-arm64/helm
Nov 1 00:22:11.850343 jq[1323]: true
Nov 1 00:22:11.856454 dbus-daemon[1295]: [system] SELinux support is enabled
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found loop1
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda1
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda2
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda3
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found usr
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda4
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda6
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda7
Nov 1 00:22:11.862289 extend-filesystems[1297]: Found vda9
Nov 1 00:22:11.862289 extend-filesystems[1297]: Checking size of /dev/vda9
Nov 1 00:22:11.856625 systemd[1]: Started dbus.service.
Nov 1 00:22:11.878702 extend-filesystems[1297]: Resized partition /dev/vda9
Nov 1 00:22:11.859157 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:22:11.880037 extend-filesystems[1351]: resize2fs 1.46.5 (30-Dec-2021)
Nov 1 00:22:11.859369 systemd[1]: Finished motdgen.service.
Nov 1 00:22:11.860363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:22:11.860383 systemd[1]: Reached target system-config.target.
Nov 1 00:22:11.861144 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:22:11.861163 systemd[1]: Reached target user-config.target.
Nov 1 00:22:11.887650 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 1 00:22:11.894619 update_engine[1312]: I1101 00:22:11.894311 1312 main.cc:92] Flatcar Update Engine starting
Nov 1 00:22:11.907358 update_engine[1312]: I1101 00:22:11.899869 1312 update_check_scheduler.cc:74] Next update check in 11m31s
Nov 1 00:22:11.897594 systemd[1]: Started update-engine.service.
Nov 1 00:22:11.904275 systemd-logind[1310]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 1 00:22:11.904471 systemd-logind[1310]: New seat seat0.
Nov 1 00:22:11.904565 systemd[1]: Started locksmithd.service.
Nov 1 00:22:11.905794 systemd[1]: Started systemd-logind.service.
Nov 1 00:22:11.912424 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 1 00:22:11.923930 extend-filesystems[1351]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 00:22:11.923930 extend-filesystems[1351]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 1 00:22:11.923930 extend-filesystems[1351]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 1 00:22:11.930728 extend-filesystems[1297]: Resized filesystem in /dev/vda9
Nov 1 00:22:11.925743 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:22:11.931715 bash[1350]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:22:11.927163 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:22:11.927368 systemd[1]: Finished extend-filesystems.service.
Nov 1 00:22:11.936224 env[1324]: time="2025-11-01T00:22:11.935621583Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:22:11.957293 env[1324]: time="2025-11-01T00:22:11.957178063Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:22:11.957658 env[1324]: time="2025-11-01T00:22:11.957634863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:11.959072 env[1324]: time="2025-11-01T00:22:11.959038303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:22:11.959153 env[1324]: time="2025-11-01T00:22:11.959138543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:11.959507 env[1324]: time="2025-11-01T00:22:11.959485103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:22:11.959600 env[1324]: time="2025-11-01T00:22:11.959584223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:11.959657 env[1324]: time="2025-11-01T00:22:11.959642823Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Nov 1 00:22:11.959705 env[1324]: time="2025-11-01T00:22:11.959692543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:11.959847 env[1324]: time="2025-11-01T00:22:11.959831543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:11.960472 env[1324]: time="2025-11-01T00:22:11.960450863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:11.960784 env[1324]: time="2025-11-01T00:22:11.960760383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:22:11.960962 env[1324]: time="2025-11-01T00:22:11.960944663Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:22:11.961068 locksmithd[1354]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:22:11.961314 env[1324]: time="2025-11-01T00:22:11.961066343Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Nov 1 00:22:11.961376 env[1324]: time="2025-11-01T00:22:11.961360743Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:22:11.964478 env[1324]: time="2025-11-01T00:22:11.964452583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:22:11.964605 env[1324]: time="2025-11-01T00:22:11.964587983Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:22:11.964666 env[1324]: time="2025-11-01T00:22:11.964652663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:22:11.964752 env[1324]: time="2025-11-01T00:22:11.964736303Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:22:11.964813 env[1324]: time="2025-11-01T00:22:11.964799463Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:22:11.964869 env[1324]: time="2025-11-01T00:22:11.964856623Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:22:11.964932 env[1324]: time="2025-11-01T00:22:11.964917983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..."
type=io.containerd.service.v1 Nov 1 00:22:11.965315 env[1324]: time="2025-11-01T00:22:11.965289423Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:22:11.965398 env[1324]: time="2025-11-01T00:22:11.965381423Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:22:11.965484 env[1324]: time="2025-11-01T00:22:11.965467903Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:22:11.965552 env[1324]: time="2025-11-01T00:22:11.965536863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:22:11.965611 env[1324]: time="2025-11-01T00:22:11.965596663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:22:11.965770 env[1324]: time="2025-11-01T00:22:11.965750543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:22:11.965928 env[1324]: time="2025-11-01T00:22:11.965910903Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:22:11.966297 env[1324]: time="2025-11-01T00:22:11.966275823Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:22:11.966391 env[1324]: time="2025-11-01T00:22:11.966375943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.966506 env[1324]: time="2025-11-01T00:22:11.966489943Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:22:11.966671 env[1324]: time="2025-11-01T00:22:11.966653823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Nov 1 00:22:11.966739 env[1324]: time="2025-11-01T00:22:11.966724223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.966797 env[1324]: time="2025-11-01T00:22:11.966782263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.966853 env[1324]: time="2025-11-01T00:22:11.966838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.966912 env[1324]: time="2025-11-01T00:22:11.966899743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.966970 env[1324]: time="2025-11-01T00:22:11.966955943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.967033 env[1324]: time="2025-11-01T00:22:11.967019023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.967095 env[1324]: time="2025-11-01T00:22:11.967082023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.967166 env[1324]: time="2025-11-01T00:22:11.967151823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:22:11.967348 env[1324]: time="2025-11-01T00:22:11.967327943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.967449 env[1324]: time="2025-11-01T00:22:11.967432063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.967515 env[1324]: time="2025-11-01T00:22:11.967501423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 1 00:22:11.967600 env[1324]: time="2025-11-01T00:22:11.967584943Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:22:11.967662 env[1324]: time="2025-11-01T00:22:11.967646623Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:22:11.967714 env[1324]: time="2025-11-01T00:22:11.967700863Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:22:11.967775 env[1324]: time="2025-11-01T00:22:11.967759823Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:22:11.967856 env[1324]: time="2025-11-01T00:22:11.967839543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:22:11.968125 env[1324]: time="2025-11-01T00:22:11.968073703Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:22:11.968795 env[1324]: time="2025-11-01T00:22:11.968498263Z" level=info msg="Connect containerd service" Nov 1 00:22:11.968795 env[1324]: time="2025-11-01T00:22:11.968582743Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:22:11.969458 env[1324]: time="2025-11-01T00:22:11.969431343Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:22:11.969851 env[1324]: time="2025-11-01T00:22:11.969828183Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:22:11.969960 env[1324]: time="2025-11-01T00:22:11.969944823Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 00:22:11.971393 env[1324]: time="2025-11-01T00:22:11.969939663Z" level=info msg="Start subscribing containerd event" Nov 1 00:22:11.971393 env[1324]: time="2025-11-01T00:22:11.970450143Z" level=info msg="Start recovering state" Nov 1 00:22:11.971393 env[1324]: time="2025-11-01T00:22:11.970515223Z" level=info msg="Start event monitor" Nov 1 00:22:11.971393 env[1324]: time="2025-11-01T00:22:11.970546543Z" level=info msg="Start snapshots syncer" Nov 1 00:22:11.971393 env[1324]: time="2025-11-01T00:22:11.970557863Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:22:11.971393 env[1324]: time="2025-11-01T00:22:11.970565303Z" level=info msg="Start streaming server" Nov 1 00:22:11.970518 systemd[1]: Started containerd.service. Nov 1 00:22:11.971735 env[1324]: time="2025-11-01T00:22:11.971709143Z" level=info msg="containerd successfully booted in 0.036722s" Nov 1 00:22:12.265809 tar[1320]: linux-arm64/README.md Nov 1 00:22:12.270618 systemd[1]: Finished prepare-helm.service. Nov 1 00:22:12.401646 systemd-networkd[1098]: eth0: Gained IPv6LL Nov 1 00:22:12.403605 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:22:12.404696 systemd[1]: Reached target network-online.target. Nov 1 00:22:12.408059 systemd[1]: Starting kubelet.service... Nov 1 00:22:13.016036 systemd[1]: Started kubelet.service. Nov 1 00:22:13.409189 kubelet[1379]: E1101 00:22:13.409102 1379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:13.411088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:13.411218 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:22:13.746909 sshd_keygen[1321]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:22:13.764548 systemd[1]: Finished sshd-keygen.service. Nov 1 00:22:13.766776 systemd[1]: Starting issuegen.service... Nov 1 00:22:13.771353 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:22:13.771673 systemd[1]: Finished issuegen.service. Nov 1 00:22:13.773737 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:22:13.779285 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:22:13.781583 systemd[1]: Started getty@tty1.service. Nov 1 00:22:13.783399 systemd[1]: Started serial-getty@ttyAMA0.service. Nov 1 00:22:13.784419 systemd[1]: Reached target getty.target. Nov 1 00:22:13.785162 systemd[1]: Reached target multi-user.target. Nov 1 00:22:13.787090 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:22:13.793183 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:22:13.793385 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:22:13.794334 systemd[1]: Startup finished in 5.060s (kernel) + 5.195s (userspace) = 10.255s. Nov 1 00:22:17.097627 systemd[1]: Created slice system-sshd.slice. Nov 1 00:22:17.098780 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:41974.service. Nov 1 00:22:17.145261 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 41974 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.147160 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.154623 systemd[1]: Created slice user-500.slice. Nov 1 00:22:17.155530 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:22:17.158375 systemd-logind[1310]: New session 1 of user core. Nov 1 00:22:17.163575 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:22:17.164632 systemd[1]: Starting user@500.service... 
Nov 1 00:22:17.167401 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.224515 systemd[1409]: Queued start job for default target default.target. Nov 1 00:22:17.224717 systemd[1409]: Reached target paths.target. Nov 1 00:22:17.224731 systemd[1409]: Reached target sockets.target. Nov 1 00:22:17.224742 systemd[1409]: Reached target timers.target. Nov 1 00:22:17.224752 systemd[1409]: Reached target basic.target. Nov 1 00:22:17.224792 systemd[1409]: Reached target default.target. Nov 1 00:22:17.224815 systemd[1409]: Startup finished in 52ms. Nov 1 00:22:17.224879 systemd[1]: Started user@500.service. Nov 1 00:22:17.225696 systemd[1]: Started session-1.scope. Nov 1 00:22:17.274625 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:41980.service. Nov 1 00:22:17.322306 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 41980 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.323504 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.327216 systemd-logind[1310]: New session 2 of user core. Nov 1 00:22:17.327567 systemd[1]: Started session-2.scope. Nov 1 00:22:17.380803 sshd[1419]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.382575 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:41988.service. Nov 1 00:22:17.384127 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:41980.service: Deactivated successfully. Nov 1 00:22:17.384923 systemd-logind[1310]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:22:17.384964 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:22:17.388013 systemd-logind[1310]: Removed session 2. 
Nov 1 00:22:17.424575 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 41988 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.425719 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.428842 systemd-logind[1310]: New session 3 of user core. Nov 1 00:22:17.429601 systemd[1]: Started session-3.scope. Nov 1 00:22:17.477775 sshd[1424]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.480005 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:41996.service. Nov 1 00:22:17.481027 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:41988.service: Deactivated successfully. Nov 1 00:22:17.481879 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:22:17.481893 systemd-logind[1310]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:22:17.482769 systemd-logind[1310]: Removed session 3. Nov 1 00:22:17.521172 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 41996 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.522258 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.529705 systemd-logind[1310]: New session 4 of user core. Nov 1 00:22:17.530038 systemd[1]: Started session-4.scope. Nov 1 00:22:17.582171 sshd[1431]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.583999 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:42008.service. Nov 1 00:22:17.584505 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:41996.service: Deactivated successfully. Nov 1 00:22:17.585272 systemd-logind[1310]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:22:17.585273 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:22:17.586150 systemd-logind[1310]: Removed session 4. 
Nov 1 00:22:17.625279 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.626649 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.629993 systemd-logind[1310]: New session 5 of user core. Nov 1 00:22:17.630341 systemd[1]: Started session-5.scope. Nov 1 00:22:17.687819 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:22:17.688042 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:22:17.699613 dbus-daemon[1295]: avc: received setenforce notice (enforcing=1) Nov 1 00:22:17.700368 sudo[1444]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:17.702223 sshd[1438]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.704381 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:42022.service. Nov 1 00:22:17.704921 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:42008.service: Deactivated successfully. Nov 1 00:22:17.705858 systemd-logind[1310]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:22:17.705896 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:22:17.706628 systemd-logind[1310]: Removed session 5. Nov 1 00:22:17.747694 sshd[1446]: Accepted publickey for core from 10.0.0.1 port 42022 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.748889 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.752114 systemd-logind[1310]: New session 6 of user core. Nov 1 00:22:17.752905 systemd[1]: Started session-6.scope. 
Nov 1 00:22:17.804710 sudo[1453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:22:17.804920 sudo[1453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:22:17.807639 sudo[1453]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:17.811553 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:22:17.811973 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:22:17.820153 systemd[1]: Stopping audit-rules.service... Nov 1 00:22:17.819000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:22:17.821509 auditctl[1456]: No rules Nov 1 00:22:17.821793 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:22:17.822022 kernel: kauditd_printk_skb: 75 callbacks suppressed Nov 1 00:22:17.822052 kernel: audit: type=1305 audit(1761956537.819:155): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:22:17.822001 systemd[1]: Stopped audit-rules.service. Nov 1 00:22:17.819000 audit[1456]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2fe65b0 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:17.823516 systemd[1]: Starting audit-rules.service... 
Nov 1 00:22:17.827264 kernel: audit: type=1300 audit(1761956537.819:155): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2fe65b0 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:17.827334 kernel: audit: type=1327 audit(1761956537.819:155): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:22:17.819000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:22:17.829051 kernel: audit: type=1131 audit(1761956537.820:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.839626 augenrules[1474]: No rules Nov 1 00:22:17.840265 systemd[1]: Finished audit-rules.service. Nov 1 00:22:17.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.841285 sudo[1452]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:17.839000 audit[1452]: USER_END pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.847158 kernel: audit: type=1130 audit(1761956537.839:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:22:17.847205 kernel: audit: type=1106 audit(1761956537.839:158): pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.847142 sshd[1446]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.839000 audit[1452]: CRED_DISP pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.846941 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:42036.service. Nov 1 00:22:17.849810 kernel: audit: type=1104 audit(1761956537.839:159): pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.92:22-10.0.0.1:42036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.852653 kernel: audit: type=1130 audit(1761956537.845:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.92:22-10.0.0.1:42036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.851000 audit[1446]: USER_END pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.854622 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:42022.service: Deactivated successfully. 
Nov 1 00:22:17.855300 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:22:17.851000 audit[1446]: CRED_DISP pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.859763 kernel: audit: type=1106 audit(1761956537.851:161): pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.859838 kernel: audit: type=1104 audit(1761956537.851:162): pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.859788 systemd-logind[1310]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:22:17.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.92:22-10.0.0.1:42022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.860445 systemd-logind[1310]: Removed session 6. 
Nov 1 00:22:17.890000 audit[1479]: USER_ACCT pid=1479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.891832 sshd[1479]: Accepted publickey for core from 10.0.0.1 port 42036 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:22:17.891000 audit[1479]: CRED_ACQ pid=1479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.891000 audit[1479]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdafad920 a2=3 a3=1 items=0 ppid=1 pid=1479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:17.891000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:17.893317 sshd[1479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:17.896844 systemd-logind[1310]: New session 7 of user core. Nov 1 00:22:17.897640 systemd[1]: Started session-7.scope. 
Nov 1 00:22:17.899000 audit[1479]: USER_START pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.900000 audit[1484]: CRED_ACQ pid=1484 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:17.947000 audit[1485]: USER_ACCT pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.948682 sudo[1485]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:22:17.947000 audit[1485]: CRED_REFR pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.949621 sudo[1485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:22:17.950000 audit[1485]: USER_START pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:17.987797 systemd[1]: Starting docker.service... 
Nov 1 00:22:18.042809 env[1497]: time="2025-11-01T00:22:18.042750463Z" level=info msg="Starting up" Nov 1 00:22:18.044215 env[1497]: time="2025-11-01T00:22:18.044195903Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:22:18.044215 env[1497]: time="2025-11-01T00:22:18.044213743Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:22:18.044284 env[1497]: time="2025-11-01T00:22:18.044238863Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:22:18.044284 env[1497]: time="2025-11-01T00:22:18.044249383Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:22:18.046173 env[1497]: time="2025-11-01T00:22:18.046146743Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:22:18.046258 env[1497]: time="2025-11-01T00:22:18.046244143Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:22:18.046332 env[1497]: time="2025-11-01T00:22:18.046316703Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:22:18.046385 env[1497]: time="2025-11-01T00:22:18.046371943Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:22:18.245397 env[1497]: time="2025-11-01T00:22:18.245268223Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 00:22:18.245397 env[1497]: time="2025-11-01T00:22:18.245317783Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 00:22:18.245797 env[1497]: time="2025-11-01T00:22:18.245492063Z" level=info msg="Loading containers: start." 
Nov 1 00:22:18.299000 audit[1530]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.299000 audit[1530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffd012970 a2=0 a3=1 items=0 ppid=1497 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.299000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 00:22:18.301000 audit[1532]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.301000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffec6c3230 a2=0 a3=1 items=0 ppid=1497 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.301000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 00:22:18.303000 audit[1534]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.303000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd4c93840 a2=0 a3=1 items=0 ppid=1497 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.303000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:22:18.305000 
audit[1536]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.305000 audit[1536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe4991e70 a2=0 a3=1 items=0 ppid=1497 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.305000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:22:18.307000 audit[1538]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.307000 audit[1538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffff6cd6c0 a2=0 a3=1 items=0 ppid=1497 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.307000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 00:22:18.337000 audit[1543]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.337000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd7e56c50 a2=0 a3=1 items=0 ppid=1497 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.337000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 00:22:18.343000 audit[1546]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.343000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe73341f0 a2=0 a3=1 items=0 ppid=1497 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.343000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 00:22:18.345000 audit[1548]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.345000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffef7e8530 a2=0 a3=1 items=0 ppid=1497 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.345000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 00:22:18.347000 audit[1550]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.347000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffd432ead0 a2=0 a3=1 items=0 ppid=1497 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.347000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:22:18.359000 audit[1554]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.359000 audit[1554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff72b5f90 a2=0 a3=1 items=0 ppid=1497 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.359000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:22:18.375000 audit[1555]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.375000 audit[1555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffbea0b70 a2=0 a3=1 items=0 ppid=1497 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:22:18.385461 kernel: Initializing XFRM netlink socket Nov 1 00:22:18.407196 env[1497]: time="2025-11-01T00:22:18.407163183Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Nov 1 00:22:18.420000 audit[1563]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.420000 audit[1563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffffd03e3c0 a2=0 a3=1 items=0 ppid=1497 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 00:22:18.441000 audit[1566]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.441000 audit[1566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff2248000 a2=0 a3=1 items=0 ppid=1497 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.441000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 00:22:18.444000 audit[1569]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.444000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff5f6b560 a2=0 a3=1 items=0 ppid=1497 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:22:18.444000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 00:22:18.445000 audit[1571]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.445000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc6e447c0 a2=0 a3=1 items=0 ppid=1497 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.445000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Nov 1 00:22:18.447000 audit[1573]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.447000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffd864c760 a2=0 a3=1 items=0 ppid=1497 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.447000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 00:22:18.449000 audit[1575]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.449000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff89aae50 a2=0 a3=1 items=0 ppid=1497 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.449000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 00:22:18.451000 audit[1577]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.451000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffdc47a080 a2=0 a3=1 items=0 ppid=1497 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.451000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 00:22:18.457000 audit[1580]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.457000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe479ce50 a2=0 a3=1 items=0 ppid=1497 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.457000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 00:22:18.459000 audit[1582]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.459000 audit[1582]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe431c3b0 a2=0 a3=1 items=0 ppid=1497 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.459000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:22:18.461000 audit[1584]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.461000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff0aaaab0 a2=0 a3=1 items=0 ppid=1497 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.461000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:22:18.463000 audit[1586]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.463000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffddbef50 a2=0 a3=1 items=0 ppid=1497 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.463000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 
00:22:18.465323 systemd-networkd[1098]: docker0: Link UP Nov 1 00:22:18.471000 audit[1590]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.471000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd18f0ea0 a2=0 a3=1 items=0 ppid=1497 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.471000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:22:18.479000 audit[1591]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:18.479000 audit[1591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd24840b0 a2=0 a3=1 items=0 ppid=1497 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.479000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:22:18.481858 env[1497]: time="2025-11-01T00:22:18.481807183Z" level=info msg="Loading containers: done." Nov 1 00:22:18.495770 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2911395-merged.mount: Deactivated successfully. 
Nov 1 00:22:18.503318 env[1497]: time="2025-11-01T00:22:18.503270503Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:22:18.503469 env[1497]: time="2025-11-01T00:22:18.503451343Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:22:18.503569 env[1497]: time="2025-11-01T00:22:18.503554263Z" level=info msg="Daemon has completed initialization" Nov 1 00:22:18.516578 systemd[1]: Started docker.service. Nov 1 00:22:18.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:18.522536 env[1497]: time="2025-11-01T00:22:18.522495423Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:22:19.145018 env[1324]: time="2025-11-01T00:22:19.144974983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:22:19.735922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3688878614.mount: Deactivated successfully. 
Nov 1 00:22:21.054228 env[1324]: time="2025-11-01T00:22:21.054173183Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:21.055514 env[1324]: time="2025-11-01T00:22:21.055481903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:21.057470 env[1324]: time="2025-11-01T00:22:21.057435943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:21.059398 env[1324]: time="2025-11-01T00:22:21.059361223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:21.060113 env[1324]: time="2025-11-01T00:22:21.060072663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 1 00:22:21.060687 env[1324]: time="2025-11-01T00:22:21.060663463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:22:22.673163 env[1324]: time="2025-11-01T00:22:22.673119983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:22.674808 env[1324]: time="2025-11-01T00:22:22.674785023Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:22:22.676655 env[1324]: time="2025-11-01T00:22:22.676635303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:22.678397 env[1324]: time="2025-11-01T00:22:22.678365983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:22.679317 env[1324]: time="2025-11-01T00:22:22.679287503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 1 00:22:22.679933 env[1324]: time="2025-11-01T00:22:22.679905543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:22:23.663305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:22:23.663499 systemd[1]: Stopped kubelet.service. Nov 1 00:22:23.664957 systemd[1]: Starting kubelet.service... Nov 1 00:22:23.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.665885 kernel: kauditd_printk_skb: 84 callbacks suppressed Nov 1 00:22:23.665930 kernel: audit: type=1130 audit(1761956543.662:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:23.670419 kernel: audit: type=1131 audit(1761956543.662:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.766318 systemd[1]: Started kubelet.service. Nov 1 00:22:23.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.769476 kernel: audit: type=1130 audit(1761956543.765:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.805744 kubelet[1639]: E1101 00:22:23.805704 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:23.808272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:23.808423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:23.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:22:23.811445 kernel: audit: type=1131 audit(1761956543.807:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 00:22:24.089958 env[1324]: time="2025-11-01T00:22:24.089846983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:24.092319 env[1324]: time="2025-11-01T00:22:24.092272783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:24.094436 env[1324]: time="2025-11-01T00:22:24.093907263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:24.096248 env[1324]: time="2025-11-01T00:22:24.096206143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:24.096988 env[1324]: time="2025-11-01T00:22:24.096943543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 1 00:22:24.098077 env[1324]: time="2025-11-01T00:22:24.098037903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:22:25.216695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364758939.mount: Deactivated successfully. 
Nov 1 00:22:25.791647 env[1324]: time="2025-11-01T00:22:25.791586223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:25.793105 env[1324]: time="2025-11-01T00:22:25.793068703Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:25.794399 env[1324]: time="2025-11-01T00:22:25.794363863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:25.795690 env[1324]: time="2025-11-01T00:22:25.795653543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:25.796137 env[1324]: time="2025-11-01T00:22:25.796094503Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 1 00:22:25.796910 env[1324]: time="2025-11-01T00:22:25.796886463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:22:26.301353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640999446.mount: Deactivated successfully. 
Nov 1 00:22:27.168259 env[1324]: time="2025-11-01T00:22:27.168203143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.169851 env[1324]: time="2025-11-01T00:22:27.169811783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.171521 env[1324]: time="2025-11-01T00:22:27.171485543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.173775 env[1324]: time="2025-11-01T00:22:27.173742743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.174481 env[1324]: time="2025-11-01T00:22:27.174448663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 1 00:22:27.175052 env[1324]: time="2025-11-01T00:22:27.175028143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:22:27.671300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543741305.mount: Deactivated successfully. 
Nov 1 00:22:27.674875 env[1324]: time="2025-11-01T00:22:27.674824583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.676199 env[1324]: time="2025-11-01T00:22:27.676176303Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.677722 env[1324]: time="2025-11-01T00:22:27.677686983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.679059 env[1324]: time="2025-11-01T00:22:27.679034783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:27.679518 env[1324]: time="2025-11-01T00:22:27.679497863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 1 00:22:27.679920 env[1324]: time="2025-11-01T00:22:27.679891583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:22:28.160016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431014193.mount: Deactivated successfully. 
Nov 1 00:22:30.430574 env[1324]: time="2025-11-01T00:22:30.430526823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:30.432014 env[1324]: time="2025-11-01T00:22:30.431980743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:30.434792 env[1324]: time="2025-11-01T00:22:30.434759743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:30.437333 env[1324]: time="2025-11-01T00:22:30.437293743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:30.438113 env[1324]: time="2025-11-01T00:22:30.438080663Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 1 00:22:33.934753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:22:33.934929 systemd[1]: Stopped kubelet.service. Nov 1 00:22:33.936361 systemd[1]: Starting kubelet.service... Nov 1 00:22:33.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:33.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:33.940854 kernel: audit: type=1130 audit(1761956553.933:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:33.940924 kernel: audit: type=1131 audit(1761956553.933:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:34.034439 kernel: audit: type=1130 audit(1761956554.030:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:34.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:34.031445 systemd[1]: Started kubelet.service. Nov 1 00:22:34.067999 kubelet[1676]: E1101 00:22:34.067960 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:34.070137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:34.070268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:34.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 00:22:34.073437 kernel: audit: type=1131 audit(1761956554.069:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:22:36.406167 systemd[1]: Stopped kubelet.service. Nov 1 00:22:36.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.408432 systemd[1]: Starting kubelet.service... Nov 1 00:22:36.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.410897 kernel: audit: type=1130 audit(1761956556.405:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.410960 kernel: audit: type=1131 audit(1761956556.405:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.428166 systemd[1]: Reloading. 
Nov 1 00:22:36.471625 /usr/lib/systemd/system-generators/torcx-generator[1715]: time="2025-11-01T00:22:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:22:36.471952 /usr/lib/systemd/system-generators/torcx-generator[1715]: time="2025-11-01T00:22:36Z" level=info msg="torcx already run" Nov 1 00:22:36.594095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:22:36.594117 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:22:36.609497 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:36.672397 systemd[1]: Started kubelet.service. Nov 1 00:22:36.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.675421 kernel: audit: type=1130 audit(1761956556.671:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.678155 systemd[1]: Stopping kubelet.service... Nov 1 00:22:36.679028 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:22:36.679326 systemd[1]: Stopped kubelet.service. 
Nov 1 00:22:36.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.681688 systemd[1]: Starting kubelet.service... Nov 1 00:22:36.682432 kernel: audit: type=1131 audit(1761956556.678:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.774452 systemd[1]: Started kubelet.service. Nov 1 00:22:36.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.779422 kernel: audit: type=1130 audit(1761956556.773:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:36.809053 kubelet[1776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:36.809053 kubelet[1776]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:36.809053 kubelet[1776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:22:36.809361 kubelet[1776]: I1101 00:22:36.809108 1776 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:37.877781 kubelet[1776]: I1101 00:22:37.877742 1776 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:37.878137 kubelet[1776]: I1101 00:22:37.878122 1776 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:37.878493 kubelet[1776]: I1101 00:22:37.878472 1776 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:37.899993 kubelet[1776]: I1101 00:22:37.899953 1776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:37.901259 kubelet[1776]: E1101 00:22:37.901229 1776 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:37.906134 kubelet[1776]: E1101 00:22:37.906098 1776 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:37.906134 kubelet[1776]: I1101 00:22:37.906128 1776 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:37.908944 kubelet[1776]: I1101 00:22:37.908924 1776 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:22:37.909835 kubelet[1776]: I1101 00:22:37.909795 1776 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:37.909990 kubelet[1776]: I1101 00:22:37.909837 1776 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:22:37.910079 kubelet[1776]: I1101 00:22:37.910065 1776 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 00:22:37.910079 kubelet[1776]: I1101 00:22:37.910078 1776 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:37.910276 kubelet[1776]: I1101 00:22:37.910264 1776 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:37.913090 kubelet[1776]: I1101 00:22:37.913067 1776 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:22:37.913138 kubelet[1776]: I1101 00:22:37.913109 1776 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:37.913138 kubelet[1776]: I1101 00:22:37.913135 1776 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:37.913192 kubelet[1776]: I1101 00:22:37.913145 1776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:37.919228 kubelet[1776]: W1101 00:22:37.919181 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:37.919384 kubelet[1776]: W1101 00:22:37.919316 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:37.919490 kubelet[1776]: E1101 00:22:37.919396 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:37.920141 kubelet[1776]: E1101 00:22:37.920115 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:37.921710 kubelet[1776]: I1101 00:22:37.921684 1776 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:22:37.923233 kubelet[1776]: I1101 00:22:37.923197 1776 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:37.923330 kubelet[1776]: W1101 00:22:37.923317 1776 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:22:37.924136 kubelet[1776]: I1101 00:22:37.924121 1776 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:37.924170 kubelet[1776]: I1101 00:22:37.924150 1776 server.go:1287] "Started kubelet" Nov 1 00:22:37.924439 kubelet[1776]: I1101 00:22:37.924389 1776 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:37.925337 kubelet[1776]: I1101 00:22:37.925201 1776 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:37.925449 kubelet[1776]: I1101 00:22:37.925382 1776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:37.925949 kubelet[1776]: I1101 00:22:37.925705 1776 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:37.925000 audit[1776]: AVC avc: denied { mac_admin } for pid=1776 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:37.926798 kubelet[1776]: I1101 00:22:37.926610 1776 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" 
err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:22:37.926798 kubelet[1776]: I1101 00:22:37.926643 1776 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:22:37.926798 kubelet[1776]: I1101 00:22:37.926711 1776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:37.927007 kubelet[1776]: I1101 00:22:37.926982 1776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:37.925000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:37.925000 audit[1776]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000be43c0 a1=4000b89128 a2=4000be4390 a3=25 items=0 ppid=1 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:37.925000 audit[1776]: AVC avc: denied { mac_admin } for pid=1776 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:37.925000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:37.925000 audit[1776]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b8e9c0 a1=4000b89140 a2=4000be4450 a3=25 items=0 ppid=1 
pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:37.927000 audit[1789]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.929443 kernel: audit: type=1400 audit(1761956557.925:210): avc: denied { mac_admin } for pid=1776 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:37.927000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcbbdf390 a2=0 a3=1 items=0 ppid=1776 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:22:37.929540 kubelet[1776]: E1101 00:22:37.928732 1776 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba2778bce507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:22:37.924132103 +0000 UTC m=+1.146466881,LastTimestamp:2025-11-01 00:22:37.924132103 +0000 UTC m=+1.146466881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:22:37.929540 kubelet[1776]: E1101 00:22:37.929221 1776 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:37.929540 kubelet[1776]: I1101 00:22:37.929245 1776 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:37.929540 kubelet[1776]: I1101 00:22:37.929317 1776 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:37.929540 kubelet[1776]: I1101 00:22:37.929355 1776 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:37.929731 kubelet[1776]: W1101 00:22:37.929613 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:37.929731 kubelet[1776]: E1101 00:22:37.929652 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:37.928000 audit[1790]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.928000 audit[1790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffde22010 a2=0 a3=1 items=0 ppid=1776 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:22:37.930660 kubelet[1776]: E1101 00:22:37.930608 1776 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:37.930740 kubelet[1776]: E1101 00:22:37.930712 1776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Nov 1 00:22:37.931134 kubelet[1776]: I1101 00:22:37.931117 1776 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:37.931134 kubelet[1776]: I1101 00:22:37.931134 1776 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:37.931233 kubelet[1776]: I1101 00:22:37.931215 1776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:37.930000 audit[1792]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.930000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffeadb6610 a2=0 a3=1 items=0 ppid=1776 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 
00:22:37.932000 audit[1794]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.932000 audit[1794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd8508cd0 a2=0 a3=1 items=0 ppid=1776 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:22:37.938000 audit[1798]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.938000 audit[1798]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffeb1cd040 a2=0 a3=1 items=0 ppid=1776 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:22:37.941026 kubelet[1776]: I1101 00:22:37.940971 1776 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:22:37.940000 audit[1802]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.940000 audit[1802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd793b4b0 a2=0 a3=1 items=0 ppid=1776 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:22:37.940000 audit[1801]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:37.940000 audit[1801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc4827a90 a2=0 a3=1 items=0 ppid=1776 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:22:37.942500 kubelet[1776]: I1101 00:22:37.942475 1776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:37.942500 kubelet[1776]: I1101 00:22:37.942501 1776 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:37.942562 kubelet[1776]: I1101 00:22:37.942516 1776 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:22:37.942562 kubelet[1776]: I1101 00:22:37.942524 1776 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:37.942615 kubelet[1776]: E1101 00:22:37.942565 1776 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:37.942000 audit[1804]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:37.942000 audit[1804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc52a2eb0 a2=0 a3=1 items=0 ppid=1776 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:22:37.944289 kubelet[1776]: W1101 00:22:37.944247 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:37.944424 kubelet[1776]: E1101 00:22:37.944390 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:37.943000 audit[1805]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.943000 audit[1805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffde56900 a2=0 a3=1 
items=0 ppid=1776 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:22:37.943000 audit[1806]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:37.943000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffccd33c00 a2=0 a3=1 items=0 ppid=1776 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:22:37.944000 audit[1807]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:37.944000 audit[1807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffda68650 a2=0 a3=1 items=0 ppid=1776 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.944000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:22:37.945000 audit[1808]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:37.945000 audit[1808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdea56ed0 a2=0 
a3=1 items=0 ppid=1776 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:37.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:22:37.949264 kubelet[1776]: I1101 00:22:37.949242 1776 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:37.949264 kubelet[1776]: I1101 00:22:37.949260 1776 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:37.949344 kubelet[1776]: I1101 00:22:37.949277 1776 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:38.028244 kubelet[1776]: I1101 00:22:38.028202 1776 policy_none.go:49] "None policy: Start" Nov 1 00:22:38.028244 kubelet[1776]: I1101 00:22:38.028246 1776 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:38.028432 kubelet[1776]: I1101 00:22:38.028274 1776 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:38.029635 kubelet[1776]: E1101 00:22:38.029605 1776 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:38.035332 kubelet[1776]: I1101 00:22:38.034493 1776 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:38.033000 audit[1776]: AVC avc: denied { mac_admin } for pid=1776 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:38.033000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:38.033000 audit[1776]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f50a20 a1=400091f1d0 a2=4000f509f0 a3=25 items=0 ppid=1 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:38.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:38.035601 kubelet[1776]: I1101 00:22:38.035362 1776 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:22:38.035601 kubelet[1776]: I1101 00:22:38.035514 1776 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:38.035601 kubelet[1776]: I1101 00:22:38.035525 1776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:38.036003 kubelet[1776]: I1101 00:22:38.035976 1776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:38.036637 kubelet[1776]: E1101 00:22:38.036615 1776 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:22:38.036703 kubelet[1776]: E1101 00:22:38.036673 1776 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:22:38.047631 kubelet[1776]: E1101 00:22:38.047595 1776 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:38.049011 kubelet[1776]: E1101 00:22:38.048988 1776 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:38.050826 kubelet[1776]: E1101 00:22:38.050806 1776 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:38.131625 kubelet[1776]: E1101 00:22:38.131534 1776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Nov 1 00:22:38.137551 kubelet[1776]: I1101 00:22:38.137534 1776 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:38.138033 kubelet[1776]: E1101 00:22:38.138002 1776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Nov 1 00:22:38.230319 kubelet[1776]: I1101 00:22:38.230297 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:38.230427 
kubelet[1776]: I1101 00:22:38.230328 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:38.230427 kubelet[1776]: I1101 00:22:38.230350 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:38.230427 kubelet[1776]: I1101 00:22:38.230365 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:38.230427 kubelet[1776]: I1101 00:22:38.230393 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:38.230427 kubelet[1776]: I1101 00:22:38.230422 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:38.230560 
kubelet[1776]: I1101 00:22:38.230439 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:38.230560 kubelet[1776]: I1101 00:22:38.230453 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:38.230560 kubelet[1776]: I1101 00:22:38.230468 1776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:38.339619 kubelet[1776]: I1101 00:22:38.339593 1776 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:38.340029 kubelet[1776]: E1101 00:22:38.339985 1776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Nov 1 00:22:38.348204 kubelet[1776]: E1101 00:22:38.348185 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.348727 env[1324]: time="2025-11-01T00:22:38.348673823Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0bd79a7bdf6d693d48729d2b5d11e801,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:38.349790 kubelet[1776]: E1101 00:22:38.349755 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.350188 env[1324]: time="2025-11-01T00:22:38.350114783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:38.351883 kubelet[1776]: E1101 00:22:38.351846 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.352181 env[1324]: time="2025-11-01T00:22:38.352154463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:38.532089 kubelet[1776]: E1101 00:22:38.532043 1776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Nov 1 00:22:38.741947 kubelet[1776]: I1101 00:22:38.741901 1776 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:38.742240 kubelet[1776]: E1101 00:22:38.742215 1776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Nov 1 00:22:38.780966 kubelet[1776]: W1101 00:22:38.780901 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:38.780966 kubelet[1776]: E1101 00:22:38.780960 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:38.855351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022194036.mount: Deactivated successfully. Nov 1 00:22:38.861189 env[1324]: time="2025-11-01T00:22:38.861151863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.862053 env[1324]: time="2025-11-01T00:22:38.862025743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.864077 env[1324]: time="2025-11-01T00:22:38.864042103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.865657 env[1324]: time="2025-11-01T00:22:38.865625743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.867383 env[1324]: time="2025-11-01T00:22:38.867348343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.870036 env[1324]: 
time="2025-11-01T00:22:38.870000743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.871088 env[1324]: time="2025-11-01T00:22:38.871050103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.874014 env[1324]: time="2025-11-01T00:22:38.873976103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.876227 env[1324]: time="2025-11-01T00:22:38.876201063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.878055 env[1324]: time="2025-11-01T00:22:38.878025783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.879036 env[1324]: time="2025-11-01T00:22:38.879010183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.879968 env[1324]: time="2025-11-01T00:22:38.879937903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:38.903508 env[1324]: time="2025-11-01T00:22:38.903435303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:38.903508 env[1324]: time="2025-11-01T00:22:38.903483223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:38.903508 env[1324]: time="2025-11-01T00:22:38.903497023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:38.903708 env[1324]: time="2025-11-01T00:22:38.903676463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b44489bd1a6ad0e453a0d3f7f39292a861d2a243c22797887b5dfe318194ad61 pid=1827 runtime=io.containerd.runc.v2 Nov 1 00:22:38.905335 env[1324]: time="2025-11-01T00:22:38.905276983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:38.905438 env[1324]: time="2025-11-01T00:22:38.905347583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:38.905438 env[1324]: time="2025-11-01T00:22:38.905384223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:38.905635 env[1324]: time="2025-11-01T00:22:38.905576983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e8dc7ffb5fa590e35e8115718c4e99a11b4aa4e07faa1c237da11c8869f79fe pid=1828 runtime=io.containerd.runc.v2 Nov 1 00:22:38.908569 env[1324]: time="2025-11-01T00:22:38.908210583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:38.908569 env[1324]: time="2025-11-01T00:22:38.908246143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:38.908569 env[1324]: time="2025-11-01T00:22:38.908256063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:38.908569 env[1324]: time="2025-11-01T00:22:38.908459743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3869f0e5025e3a5fa3b6c765f59bed3b44ca7f3ae0f7226da65ac96d65a88ea7 pid=1847 runtime=io.containerd.runc.v2 Nov 1 00:22:38.961236 env[1324]: time="2025-11-01T00:22:38.961175783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0bd79a7bdf6d693d48729d2b5d11e801,Namespace:kube-system,Attempt:0,} returns sandbox id \"3869f0e5025e3a5fa3b6c765f59bed3b44ca7f3ae0f7226da65ac96d65a88ea7\"" Nov 1 00:22:38.962973 kubelet[1776]: E1101 00:22:38.962939 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.964880 env[1324]: time="2025-11-01T00:22:38.964204423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44489bd1a6ad0e453a0d3f7f39292a861d2a243c22797887b5dfe318194ad61\"" Nov 1 00:22:38.965550 kubelet[1776]: E1101 00:22:38.965383 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.967386 env[1324]: time="2025-11-01T00:22:38.967347103Z" level=info msg="CreateContainer within sandbox \"3869f0e5025e3a5fa3b6c765f59bed3b44ca7f3ae0f7226da65ac96d65a88ea7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:22:38.967614 env[1324]: time="2025-11-01T00:22:38.967432223Z" level=info 
msg="CreateContainer within sandbox \"b44489bd1a6ad0e453a0d3f7f39292a861d2a243c22797887b5dfe318194ad61\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:38.969765 env[1324]: time="2025-11-01T00:22:38.969734863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e8dc7ffb5fa590e35e8115718c4e99a11b4aa4e07faa1c237da11c8869f79fe\"" Nov 1 00:22:38.970236 kubelet[1776]: E1101 00:22:38.970216 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.971771 env[1324]: time="2025-11-01T00:22:38.971741503Z" level=info msg="CreateContainer within sandbox \"0e8dc7ffb5fa590e35e8115718c4e99a11b4aa4e07faa1c237da11c8869f79fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:38.984162 env[1324]: time="2025-11-01T00:22:38.984120623Z" level=info msg="CreateContainer within sandbox \"3869f0e5025e3a5fa3b6c765f59bed3b44ca7f3ae0f7226da65ac96d65a88ea7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8d9e84588f1b369b8507d5756fbd97ceec6376f3c48fa9bc39ad7d84e39810e\"" Nov 1 00:22:38.984756 env[1324]: time="2025-11-01T00:22:38.984726543Z" level=info msg="StartContainer for \"f8d9e84588f1b369b8507d5756fbd97ceec6376f3c48fa9bc39ad7d84e39810e\"" Nov 1 00:22:38.989446 env[1324]: time="2025-11-01T00:22:38.989395983Z" level=info msg="CreateContainer within sandbox \"b44489bd1a6ad0e453a0d3f7f39292a861d2a243c22797887b5dfe318194ad61\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9398d3298b7d43d85aba0b68da55d7d170bcd41dab03fb1fb7c5de174cc6463\"" Nov 1 00:22:38.990262 env[1324]: time="2025-11-01T00:22:38.990229103Z" level=info msg="StartContainer for 
\"f9398d3298b7d43d85aba0b68da55d7d170bcd41dab03fb1fb7c5de174cc6463\"" Nov 1 00:22:38.991635 env[1324]: time="2025-11-01T00:22:38.991591223Z" level=info msg="CreateContainer within sandbox \"0e8dc7ffb5fa590e35e8115718c4e99a11b4aa4e07faa1c237da11c8869f79fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3c0f0b8120a8e06254f7db21b43d7595414b0e0de1baed84c2ab93861e85c5b\"" Nov 1 00:22:38.991939 env[1324]: time="2025-11-01T00:22:38.991906383Z" level=info msg="StartContainer for \"f3c0f0b8120a8e06254f7db21b43d7595414b0e0de1baed84c2ab93861e85c5b\"" Nov 1 00:22:39.023023 kubelet[1776]: W1101 00:22:39.022945 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:39.023023 kubelet[1776]: E1101 00:22:39.022989 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:39.043772 kubelet[1776]: W1101 00:22:39.043713 1776 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Nov 1 00:22:39.043772 kubelet[1776]: E1101 00:22:39.043775 1776 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:39.055765 env[1324]: 
time="2025-11-01T00:22:39.053624583Z" level=info msg="StartContainer for \"f8d9e84588f1b369b8507d5756fbd97ceec6376f3c48fa9bc39ad7d84e39810e\" returns successfully" Nov 1 00:22:39.062428 env[1324]: time="2025-11-01T00:22:39.062316383Z" level=info msg="StartContainer for \"f3c0f0b8120a8e06254f7db21b43d7595414b0e0de1baed84c2ab93861e85c5b\" returns successfully" Nov 1 00:22:39.067104 env[1324]: time="2025-11-01T00:22:39.067061623Z" level=info msg="StartContainer for \"f9398d3298b7d43d85aba0b68da55d7d170bcd41dab03fb1fb7c5de174cc6463\" returns successfully" Nov 1 00:22:39.544426 kubelet[1776]: I1101 00:22:39.544277 1776 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:39.950835 kubelet[1776]: E1101 00:22:39.950797 1776 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:39.950965 kubelet[1776]: E1101 00:22:39.950945 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:39.952338 kubelet[1776]: E1101 00:22:39.952306 1776 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:39.952464 kubelet[1776]: E1101 00:22:39.952445 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:39.954083 kubelet[1776]: E1101 00:22:39.954049 1776 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:39.954176 kubelet[1776]: E1101 00:22:39.954156 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:40.236346 kubelet[1776]: E1101 00:22:40.236255 1776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:22:40.323140 kubelet[1776]: I1101 00:22:40.323102 1776 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:22:40.329478 kubelet[1776]: I1101 00:22:40.329447 1776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:40.337790 kubelet[1776]: E1101 00:22:40.337767 1776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:40.337907 kubelet[1776]: I1101 00:22:40.337895 1776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:40.341164 kubelet[1776]: E1101 00:22:40.341139 1776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:40.341252 kubelet[1776]: I1101 00:22:40.341241 1776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:40.344086 kubelet[1776]: E1101 00:22:40.344062 1776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:40.373077 kubelet[1776]: E1101 00:22:40.372988 1776 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1873ba2778bce507 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:22:37.924132103 +0000 UTC m=+1.146466881,LastTimestamp:2025-11-01 00:22:37.924132103 +0000 UTC m=+1.146466881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:22:40.914938 kubelet[1776]: I1101 00:22:40.914895 1776 apiserver.go:52] "Watching apiserver" Nov 1 00:22:40.930706 kubelet[1776]: I1101 00:22:40.930640 1776 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:40.954886 kubelet[1776]: I1101 00:22:40.954847 1776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:40.954960 kubelet[1776]: I1101 00:22:40.954944 1776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:40.958210 kubelet[1776]: E1101 00:22:40.958182 1776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:40.958329 kubelet[1776]: E1101 00:22:40.958312 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:40.958438 kubelet[1776]: E1101 00:22:40.958190 1776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:40.958639 kubelet[1776]: E1101 00:22:40.958623 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:42.219856 systemd[1]: Reloading. Nov 1 00:22:42.264249 /usr/lib/systemd/system-generators/torcx-generator[2071]: time="2025-11-01T00:22:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:22:42.264284 /usr/lib/systemd/system-generators/torcx-generator[2071]: time="2025-11-01T00:22:42Z" level=info msg="torcx already run" Nov 1 00:22:42.326426 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:22:42.326447 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:22:42.341731 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:42.412433 systemd[1]: Stopping kubelet.service... Nov 1 00:22:42.435938 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:22:42.436241 systemd[1]: Stopped kubelet.service. Nov 1 00:22:42.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:42.436971 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 00:22:42.437010 kernel: audit: type=1131 audit(1761956562.434:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:22:42.438005 systemd[1]: Starting kubelet.service... Nov 1 00:22:42.530614 systemd[1]: Started kubelet.service. Nov 1 00:22:42.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:42.534545 kernel: audit: type=1130 audit(1761956562.529:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:42.571200 kubelet[2124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:42.571543 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:42.571587 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:22:42.571750 kubelet[2124]: I1101 00:22:42.571722 2124 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:42.577479 kubelet[2124]: I1101 00:22:42.577449 2124 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:42.577479 kubelet[2124]: I1101 00:22:42.577477 2124 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:42.577794 kubelet[2124]: I1101 00:22:42.577760 2124 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:42.578966 kubelet[2124]: I1101 00:22:42.578940 2124 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:22:42.581074 kubelet[2124]: I1101 00:22:42.581056 2124 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:42.584744 kubelet[2124]: E1101 00:22:42.584698 2124 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:42.584744 kubelet[2124]: I1101 00:22:42.584736 2124 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:42.587220 kubelet[2124]: I1101 00:22:42.587192 2124 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:22:42.587681 kubelet[2124]: I1101 00:22:42.587652 2124 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:42.587822 kubelet[2124]: I1101 00:22:42.587677 2124 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:22:42.587895 kubelet[2124]: I1101 00:22:42.587832 2124 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 00:22:42.587895 kubelet[2124]: I1101 00:22:42.587841 2124 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:42.587895 kubelet[2124]: I1101 00:22:42.587878 2124 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:42.588027 kubelet[2124]: I1101 00:22:42.588015 2124 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:22:42.588053 kubelet[2124]: I1101 00:22:42.588043 2124 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:42.588099 kubelet[2124]: I1101 00:22:42.588089 2124 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:42.589454 kubelet[2124]: I1101 00:22:42.589436 2124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:42.589989 kubelet[2124]: I1101 00:22:42.589971 2124 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:22:42.590517 kubelet[2124]: I1101 00:22:42.590502 2124 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:42.590952 kubelet[2124]: I1101 00:22:42.590935 2124 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:42.591043 kubelet[2124]: I1101 00:22:42.591033 2124 server.go:1287] "Started kubelet" Nov 1 00:22:42.591577 kubelet[2124]: I1101 00:22:42.591518 2124 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:42.591886 kubelet[2124]: I1101 00:22:42.591856 2124 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:42.591935 kubelet[2124]: I1101 00:22:42.591911 2124 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:42.590000 audit[2124]: AVC avc: denied { mac_admin } for pid=2124 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:22:42.592338 kubelet[2124]: I1101 00:22:42.592317 2124 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:22:42.592450 kubelet[2124]: I1101 00:22:42.592433 2124 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:22:42.592537 kubelet[2124]: I1101 00:22:42.592526 2124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:42.599732 kubelet[2124]: E1101 00:22:42.598472 2124 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:42.590000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:42.602109 kernel: audit: type=1400 audit(1761956562.590:227): avc: denied { mac_admin } for pid=2124 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:42.602176 kernel: audit: type=1401 audit(1761956562.590:227): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:42.602271 kernel: audit: type=1300 audit(1761956562.590:227): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009a3950 a1=40009ab068 a2=40009a3920 a3=25 items=0 ppid=1 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.590000 audit[2124]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009a3950 a1=40009ab068 
a2=40009a3920 a3=25 items=0 ppid=1 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:42.609380 kernel: audit: type=1327 audit(1761956562.590:227): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:42.609489 kubelet[2124]: I1101 00:22:42.608854 2124 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:42.590000 audit[2124]: AVC avc: denied { mac_admin } for pid=2124 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:42.611790 kubelet[2124]: I1101 00:22:42.611765 2124 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:42.612433 kernel: audit: type=1400 audit(1761956562.590:228): avc: denied { mac_admin } for pid=2124 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:42.613021 kubelet[2124]: I1101 00:22:42.613002 2124 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:42.590000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:42.613694 kubelet[2124]: I1101 00:22:42.612806 2124 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:42.613878 kubelet[2124]: I1101 00:22:42.613619 2124 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:42.614496 kubelet[2124]: I1101 00:22:42.614475 2124 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:42.614706 kernel: audit: type=1401 audit(1761956562.590:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:42.615204 kubelet[2124]: I1101 00:22:42.615175 2124 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:42.615204 kubelet[2124]: I1101 00:22:42.615195 2124 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:42.590000 audit[2124]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40003a2380 a1=40009ab080 a2=40009a39e0 a3=25 items=0 ppid=1 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.618987 kernel: audit: type=1300 audit(1761956562.590:228): arch=c00000b7 syscall=5 success=no exit=-22 a0=40003a2380 a1=40009ab080 a2=40009a39e0 a3=25 items=0 ppid=1 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:42.622431 kernel: audit: type=1327 audit(1761956562.590:228): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:42.637419 kubelet[2124]: I1101 00:22:42.637361 2124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:42.639218 kubelet[2124]: I1101 00:22:42.639197 2124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:42.639277 kubelet[2124]: I1101 00:22:42.639224 2124 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:42.639277 kubelet[2124]: I1101 00:22:42.639242 2124 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:22:42.639277 kubelet[2124]: I1101 00:22:42.639250 2124 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:42.639373 kubelet[2124]: E1101 00:22:42.639290 2124 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:42.657599 kubelet[2124]: I1101 00:22:42.657575 2124 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:42.657678 kubelet[2124]: I1101 00:22:42.657652 2124 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:42.657678 kubelet[2124]: I1101 00:22:42.657674 2124 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:42.657814 kubelet[2124]: I1101 00:22:42.657799 2124 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:42.657857 kubelet[2124]: I1101 00:22:42.657817 2124 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:42.657857 kubelet[2124]: I1101 00:22:42.657835 2124 policy_none.go:49] "None policy: Start" Nov 1 00:22:42.657857 kubelet[2124]: I1101 00:22:42.657844 2124 
memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:42.657857 kubelet[2124]: I1101 00:22:42.657854 2124 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:42.657968 kubelet[2124]: I1101 00:22:42.657956 2124 state_mem.go:75] "Updated machine memory state" Nov 1 00:22:42.659031 kubelet[2124]: I1101 00:22:42.659010 2124 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:42.657000 audit[2124]: AVC avc: denied { mac_admin } for pid=2124 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:22:42.657000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:22:42.657000 audit[2124]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40011cacf0 a1=40011c6f90 a2=40011cacc0 a3=25 items=0 ppid=1 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:22:42.659227 kubelet[2124]: I1101 00:22:42.659068 2124 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:22:42.659227 kubelet[2124]: I1101 00:22:42.659204 2124 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:42.659277 kubelet[2124]: I1101 00:22:42.659215 2124 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:42.659758 kubelet[2124]: I1101 00:22:42.659734 2124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:42.662205 kubelet[2124]: E1101 00:22:42.662172 2124 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:42.740666 kubelet[2124]: I1101 00:22:42.740604 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:42.740798 kubelet[2124]: I1101 00:22:42.740645 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:42.740886 kubelet[2124]: I1101 00:22:42.740740 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:42.766167 kubelet[2124]: I1101 00:22:42.766147 2124 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:42.772119 kubelet[2124]: I1101 00:22:42.771456 2124 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:22:42.772119 kubelet[2124]: I1101 00:22:42.771523 2124 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:22:42.815718 kubelet[2124]: I1101 00:22:42.815643 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:42.815718 kubelet[2124]: I1101 00:22:42.815674 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:42.815718 kubelet[2124]: I1101 00:22:42.815705 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:42.815835 kubelet[2124]: I1101 00:22:42.815729 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:42.815835 kubelet[2124]: I1101 00:22:42.815746 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:42.815835 kubelet[2124]: I1101 00:22:42.815760 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:42.815835 kubelet[2124]: I1101 00:22:42.815784 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:42.815835 kubelet[2124]: I1101 00:22:42.815801 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bd79a7bdf6d693d48729d2b5d11e801-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0bd79a7bdf6d693d48729d2b5d11e801\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:42.815946 kubelet[2124]: I1101 00:22:42.815818 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:43.051084 kubelet[2124]: E1101 00:22:43.051050 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.051261 kubelet[2124]: E1101 00:22:43.051050 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.051353 kubelet[2124]: E1101 00:22:43.051055 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.589989 kubelet[2124]: I1101 00:22:43.589944 2124 apiserver.go:52] "Watching apiserver" Nov 1 00:22:43.614118 kubelet[2124]: I1101 00:22:43.614062 2124 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:43.649460 kubelet[2124]: I1101 00:22:43.649429 2124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:43.649761 kubelet[2124]: E1101 00:22:43.649741 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.649855 kubelet[2124]: E1101 00:22:43.649823 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.656586 kubelet[2124]: E1101 00:22:43.656554 2124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:43.656827 kubelet[2124]: E1101 00:22:43.656809 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.676844 kubelet[2124]: I1101 00:22:43.676784 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.676746783 podStartE2EDuration="1.676746783s" podCreationTimestamp="2025-11-01 00:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:43.673681823 +0000 UTC m=+1.137575001" watchObservedRunningTime="2025-11-01 00:22:43.676746783 +0000 UTC 
m=+1.140639961" Nov 1 00:22:43.698662 kubelet[2124]: I1101 00:22:43.698602 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6985842629999999 podStartE2EDuration="1.698584263s" podCreationTimestamp="2025-11-01 00:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:43.687137743 +0000 UTC m=+1.151030921" watchObservedRunningTime="2025-11-01 00:22:43.698584263 +0000 UTC m=+1.162477441" Nov 1 00:22:43.707923 kubelet[2124]: I1101 00:22:43.707839 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.707820063 podStartE2EDuration="1.707820063s" podCreationTimestamp="2025-11-01 00:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:43.698715463 +0000 UTC m=+1.162608601" watchObservedRunningTime="2025-11-01 00:22:43.707820063 +0000 UTC m=+1.171713241" Nov 1 00:22:44.650717 kubelet[2124]: E1101 00:22:44.650665 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:44.651213 kubelet[2124]: E1101 00:22:44.651174 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:44.651522 kubelet[2124]: E1101 00:22:44.651499 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:45.652613 kubelet[2124]: E1101 00:22:45.652565 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:47.366791 kubelet[2124]: I1101 00:22:47.366739 2124 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:22:47.367121 env[1324]: time="2025-11-01T00:22:47.367095177Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:22:47.367388 kubelet[2124]: I1101 00:22:47.367330 2124 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:22:48.048420 kubelet[2124]: I1101 00:22:48.048364 2124 status_manager.go:890] "Failed to get status for pod" podUID="e96239c6-d7da-498e-9e61-a07a1f5222cb" pod="kube-system/kube-proxy-pr94k" err="pods \"kube-proxy-pr94k\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Nov 1 00:22:48.053851 kubelet[2124]: I1101 00:22:48.053813 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e96239c6-d7da-498e-9e61-a07a1f5222cb-kube-proxy\") pod \"kube-proxy-pr94k\" (UID: \"e96239c6-d7da-498e-9e61-a07a1f5222cb\") " pod="kube-system/kube-proxy-pr94k" Nov 1 00:22:48.053851 kubelet[2124]: I1101 00:22:48.053849 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e96239c6-d7da-498e-9e61-a07a1f5222cb-xtables-lock\") pod \"kube-proxy-pr94k\" (UID: \"e96239c6-d7da-498e-9e61-a07a1f5222cb\") " pod="kube-system/kube-proxy-pr94k" Nov 1 00:22:48.053972 kubelet[2124]: I1101 00:22:48.053866 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e96239c6-d7da-498e-9e61-a07a1f5222cb-lib-modules\") pod \"kube-proxy-pr94k\" (UID: \"e96239c6-d7da-498e-9e61-a07a1f5222cb\") " pod="kube-system/kube-proxy-pr94k" Nov 1 00:22:48.053972 kubelet[2124]: I1101 00:22:48.053882 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlsm\" (UniqueName: \"kubernetes.io/projected/e96239c6-d7da-498e-9e61-a07a1f5222cb-kube-api-access-qtlsm\") pod \"kube-proxy-pr94k\" (UID: \"e96239c6-d7da-498e-9e61-a07a1f5222cb\") " pod="kube-system/kube-proxy-pr94k" Nov 1 00:22:48.161247 kubelet[2124]: E1101 00:22:48.161179 2124 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:22:48.161247 kubelet[2124]: E1101 00:22:48.161219 2124 projected.go:194] Error preparing data for projected volume kube-api-access-qtlsm for pod kube-system/kube-proxy-pr94k: configmap "kube-root-ca.crt" not found Nov 1 00:22:48.161426 kubelet[2124]: E1101 00:22:48.161269 2124 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96239c6-d7da-498e-9e61-a07a1f5222cb-kube-api-access-qtlsm podName:e96239c6-d7da-498e-9e61-a07a1f5222cb nodeName:}" failed. No retries permitted until 2025-11-01 00:22:48.661251499 +0000 UTC m=+6.125144637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qtlsm" (UniqueName: "kubernetes.io/projected/e96239c6-d7da-498e-9e61-a07a1f5222cb-kube-api-access-qtlsm") pod "kube-proxy-pr94k" (UID: "e96239c6-d7da-498e-9e61-a07a1f5222cb") : configmap "kube-root-ca.crt" not found Nov 1 00:22:48.659198 kubelet[2124]: I1101 00:22:48.659164 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2r9n\" (UniqueName: \"kubernetes.io/projected/272fdff3-5c9f-4247-ad54-473b415d87ec-kube-api-access-s2r9n\") pod \"tigera-operator-7dcd859c48-sz2tt\" (UID: \"272fdff3-5c9f-4247-ad54-473b415d87ec\") " pod="tigera-operator/tigera-operator-7dcd859c48-sz2tt" Nov 1 00:22:48.659653 kubelet[2124]: I1101 00:22:48.659619 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/272fdff3-5c9f-4247-ad54-473b415d87ec-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sz2tt\" (UID: \"272fdff3-5c9f-4247-ad54-473b415d87ec\") " pod="tigera-operator/tigera-operator-7dcd859c48-sz2tt" Nov 1 00:22:48.760641 kubelet[2124]: I1101 00:22:48.760608 2124 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:22:48.859912 env[1324]: time="2025-11-01T00:22:48.859853777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sz2tt,Uid:272fdff3-5c9f-4247-ad54-473b415d87ec,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:22:48.875252 env[1324]: time="2025-11-01T00:22:48.875195311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:48.875399 env[1324]: time="2025-11-01T00:22:48.875232272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:48.875399 env[1324]: time="2025-11-01T00:22:48.875242232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:48.875480 env[1324]: time="2025-11-01T00:22:48.875421713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/087518bd59dac92f7c245f9f01197dc9d06665598b9f581ccf3757dfe83adb1a pid=2182 runtime=io.containerd.runc.v2 Nov 1 00:22:48.923379 env[1324]: time="2025-11-01T00:22:48.923267366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sz2tt,Uid:272fdff3-5c9f-4247-ad54-473b415d87ec,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"087518bd59dac92f7c245f9f01197dc9d06665598b9f581ccf3757dfe83adb1a\"" Nov 1 00:22:48.925664 env[1324]: time="2025-11-01T00:22:48.925634820Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:22:48.949149 kubelet[2124]: E1101 00:22:48.949117 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:48.949878 env[1324]: time="2025-11-01T00:22:48.949600967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pr94k,Uid:e96239c6-d7da-498e-9e61-a07a1f5222cb,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:48.962543 env[1324]: time="2025-11-01T00:22:48.962466486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:48.962543 env[1324]: time="2025-11-01T00:22:48.962517886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:48.962672 env[1324]: time="2025-11-01T00:22:48.962529086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:48.962896 env[1324]: time="2025-11-01T00:22:48.962862768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57d58dcbe1fb3b603cde809fb7a12251307c74fbd92716cf377303ebf4813db7 pid=2221 runtime=io.containerd.runc.v2 Nov 1 00:22:48.999490 env[1324]: time="2025-11-01T00:22:48.999448032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pr94k,Uid:e96239c6-d7da-498e-9e61-a07a1f5222cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"57d58dcbe1fb3b603cde809fb7a12251307c74fbd92716cf377303ebf4813db7\"" Nov 1 00:22:49.000025 kubelet[2124]: E1101 00:22:49.000003 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:49.003378 env[1324]: time="2025-11-01T00:22:49.003335935Z" level=info msg="CreateContainer within sandbox \"57d58dcbe1fb3b603cde809fb7a12251307c74fbd92716cf377303ebf4813db7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:22:49.016783 env[1324]: time="2025-11-01T00:22:49.016727652Z" level=info msg="CreateContainer within sandbox \"57d58dcbe1fb3b603cde809fb7a12251307c74fbd92716cf377303ebf4813db7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a31ee008ee719b31ddac410b4b2ce4572505d12a18b8b7106d2435e86028a5dd\"" Nov 1 00:22:49.017843 env[1324]: time="2025-11-01T00:22:49.017809018Z" level=info msg="StartContainer for \"a31ee008ee719b31ddac410b4b2ce4572505d12a18b8b7106d2435e86028a5dd\"" Nov 1 00:22:49.074732 env[1324]: time="2025-11-01T00:22:49.074689865Z" level=info msg="StartContainer for 
\"a31ee008ee719b31ddac410b4b2ce4572505d12a18b8b7106d2435e86028a5dd\" returns successfully" Nov 1 00:22:49.220000 audit[2324]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.224632 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 00:22:49.224675 kernel: audit: type=1325 audit(1761956569.220:230): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.224692 kernel: audit: type=1300 audit(1761956569.220:230): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8f03690 a2=0 a3=1 items=0 ppid=2274 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.220000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8f03690 a2=0 a3=1 items=0 ppid=2274 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:22:49.230006 kernel: audit: type=1327 audit(1761956569.220:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:22:49.230078 kernel: audit: type=1325 audit(1761956569.221:231): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.221000 audit[2325]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.232064 kernel: audit: type=1300 
audit(1761956569.221:231): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf1c17f0 a2=0 a3=1 items=0 ppid=2274 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.221000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf1c17f0 a2=0 a3=1 items=0 ppid=2274 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.235395 kernel: audit: type=1327 audit(1761956569.221:231): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:22:49.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:22:49.237114 kernel: audit: type=1325 audit(1761956569.223:232): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.223000 audit[2327]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.223000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd21ea7f0 a2=0 a3=1 items=0 ppid=2274 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.242116 kernel: audit: type=1300 audit(1761956569.223:232): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd21ea7f0 a2=0 a3=1 items=0 ppid=2274 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.242154 kernel: audit: type=1327 audit(1761956569.223:232): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:22:49.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:22:49.224000 audit[2328]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.245492 kernel: audit: type=1325 audit(1761956569.224:233): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.224000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa0182f0 a2=0 a3=1 items=0 ppid=2274 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:22:49.229000 audit[2329]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.229000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6ab50d0 a2=0 a3=1 items=0 ppid=2274 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:22:49.230000 audit[2330]: 
NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.230000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf0bc2c0 a2=0 a3=1 items=0 ppid=2274 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:22:49.323000 audit[2331]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.323000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd9f9a7e0 a2=0 a3=1 items=0 ppid=2274 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:22:49.326000 audit[2333]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.326000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe162bcc0 a2=0 a3=1 items=0 ppid=2274 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.326000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:22:49.329000 audit[2336]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.329000 audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe333f3a0 a2=0 a3=1 items=0 ppid=2274 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:22:49.330000 audit[2337]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.330000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcad72510 a2=0 a3=1 items=0 ppid=2274 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:22:49.332000 audit[2339]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.332000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 
a1=fffff3754c60 a2=0 a3=1 items=0 ppid=2274 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:22:49.333000 audit[2340]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.333000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7cea660 a2=0 a3=1 items=0 ppid=2274 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:22:49.335000 audit[2342]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.335000 audit[2342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe9771e30 a2=0 a3=1 items=0 ppid=2274 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.335000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:22:49.338000 audit[2345]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.338000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc10e7cf0 a2=0 a3=1 items=0 ppid=2274 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:22:49.339000 audit[2346]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.339000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc244ac0 a2=0 a3=1 items=0 ppid=2274 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:22:49.341000 audit[2348]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.341000 audit[2348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 
a1=ffffdb822af0 a2=0 a3=1 items=0 ppid=2274 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:22:49.342000 audit[2349]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.342000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdf4d1060 a2=0 a3=1 items=0 ppid=2274 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:22:49.344000 audit[2351]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.344000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4218ac0 a2=0 a3=1 items=0 ppid=2274 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 
1 00:22:49.347000 audit[2354]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.347000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea447580 a2=0 a3=1 items=0 ppid=2274 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:22:49.351000 audit[2357]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.351000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcfd14400 a2=0 a3=1 items=0 ppid=2274 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:22:49.352000 audit[2358]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.352000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd30f6e20 a2=0 a3=1 items=0 ppid=2274 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:22:49.354000 audit[2360]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.354000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcdf6ca70 a2=0 a3=1 items=0 ppid=2274 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:22:49.357000 audit[2363]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.357000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcbdd2b20 a2=0 a3=1 items=0 ppid=2274 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:22:49.358000 audit[2364]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.358000 
audit[2364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe12461a0 a2=0 a3=1 items=0 ppid=2274 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:22:49.360000 audit[2366]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:22:49.360000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe3f0b420 a2=0 a3=1 items=0 ppid=2274 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:22:49.385000 audit[2372]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:49.385000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc1662700 a2=0 a3=1 items=0 ppid=2274 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:49.398000 audit[2372]: NETFILTER_CFG table=nat:64 
family=2 entries=14 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:49.398000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc1662700 a2=0 a3=1 items=0 ppid=2274 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.398000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:49.399000 audit[2377]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.399000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe4b34c60 a2=0 a3=1 items=0 ppid=2274 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:22:49.401000 audit[2379]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.401000 audit[2379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc9a2d8c0 a2=0 a3=1 items=0 ppid=2274 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.401000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:22:49.405000 audit[2382]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.405000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc08f80a0 a2=0 a3=1 items=0 ppid=2274 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:22:49.406000 audit[2383]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.406000 audit[2383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf4634b0 a2=0 a3=1 items=0 ppid=2274 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:22:49.408000 audit[2385]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.408000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=ffffe5063b60 a2=0 a3=1 items=0 ppid=2274 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:22:49.409000 audit[2386]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.409000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb907530 a2=0 a3=1 items=0 ppid=2274 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:22:49.411000 audit[2388]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.411000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffec3b1d0 a2=0 a3=1 items=0 ppid=2274 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.411000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:22:49.414000 audit[2391]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.414000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff351f000 a2=0 a3=1 items=0 ppid=2274 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:22:49.415000 audit[2392]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.415000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6788870 a2=0 a3=1 items=0 ppid=2274 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:22:49.417000 audit[2394]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.417000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=ffffebe8ef10 a2=0 a3=1 items=0 ppid=2274 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:22:49.418000 audit[2395]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.418000 audit[2395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc27ad540 a2=0 a3=1 items=0 ppid=2274 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:22:49.420000 audit[2397]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.420000 audit[2397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe9a5f80 a2=0 a3=1 items=0 ppid=2274 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.420000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:22:49.423000 audit[2400]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.423000 audit[2400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdbae3550 a2=0 a3=1 items=0 ppid=2274 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:22:49.426000 audit[2403]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.426000 audit[2403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef737d00 a2=0 a3=1 items=0 ppid=2274 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:22:49.427000 audit[2404]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.427000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffccb99080 a2=0 a3=1 items=0 ppid=2274 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:22:49.429000 audit[2406]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.429000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffecaa6990 a2=0 a3=1 items=0 ppid=2274 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:22:49.432000 audit[2409]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.432000 audit[2409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe02dae80 a2=0 a3=1 items=0 ppid=2274 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.432000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:22:49.433000 audit[2410]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.433000 audit[2410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe96dcfe0 a2=0 a3=1 items=0 ppid=2274 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:22:49.435000 audit[2412]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.435000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe372bb40 a2=0 a3=1 items=0 ppid=2274 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:22:49.436000 audit[2413]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.436000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd23b5050 a2=0 a3=1 items=0 ppid=2274 pid=2413 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:22:49.438000 audit[2415]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.438000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe91d3d00 a2=0 a3=1 items=0 ppid=2274 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:22:49.441000 audit[2418]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:22:49.441000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd8ab1400 a2=0 a3=1 items=0 ppid=2274 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:22:49.443000 audit[2420]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:22:49.443000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd720ab20 a2=0 
a3=1 items=0 ppid=2274 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.443000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:49.444000 audit[2420]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:22:49.444000 audit[2420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd720ab20 a2=0 a3=1 items=0 ppid=2274 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:49.444000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:49.659611 kubelet[2124]: E1101 00:22:49.659219 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:49.669742 kubelet[2124]: I1101 00:22:49.669529 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pr94k" podStartSLOduration=1.6695145999999998 podStartE2EDuration="1.6695146s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:49.669135838 +0000 UTC m=+7.133029056" watchObservedRunningTime="2025-11-01 00:22:49.6695146 +0000 UTC m=+7.133407778" Nov 1 00:22:49.766983 systemd[1]: run-containerd-runc-k8s.io-087518bd59dac92f7c245f9f01197dc9d06665598b9f581ccf3757dfe83adb1a-runc.IU2nAc.mount: 
Deactivated successfully. Nov 1 00:22:50.383719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685364500.mount: Deactivated successfully. Nov 1 00:22:50.785438 kubelet[2124]: E1101 00:22:50.785166 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:51.663959 kubelet[2124]: E1101 00:22:51.663919 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:51.754496 env[1324]: time="2025-11-01T00:22:51.754454287Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:51.756382 env[1324]: time="2025-11-01T00:22:51.756328176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:51.758643 env[1324]: time="2025-11-01T00:22:51.758615748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:51.759857 env[1324]: time="2025-11-01T00:22:51.759829994Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:22:51.760392 env[1324]: time="2025-11-01T00:22:51.760369077Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 1 00:22:51.763336 env[1324]: time="2025-11-01T00:22:51.763287332Z" level=info 
msg="CreateContainer within sandbox \"087518bd59dac92f7c245f9f01197dc9d06665598b9f581ccf3757dfe83adb1a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:22:51.772274 env[1324]: time="2025-11-01T00:22:51.772244377Z" level=info msg="CreateContainer within sandbox \"087518bd59dac92f7c245f9f01197dc9d06665598b9f581ccf3757dfe83adb1a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85e19ccb6c0ed70e4f556396b162be46298bbcf50d5136cbe6544773f0fd3771\"" Nov 1 00:22:51.772797 env[1324]: time="2025-11-01T00:22:51.772711059Z" level=info msg="StartContainer for \"85e19ccb6c0ed70e4f556396b162be46298bbcf50d5136cbe6544773f0fd3771\"" Nov 1 00:22:51.816834 env[1324]: time="2025-11-01T00:22:51.816784201Z" level=info msg="StartContainer for \"85e19ccb6c0ed70e4f556396b162be46298bbcf50d5136cbe6544773f0fd3771\" returns successfully" Nov 1 00:22:52.669681 kubelet[2124]: E1101 00:22:52.669602 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:54.145160 kubelet[2124]: E1101 00:22:54.145113 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:54.159458 kubelet[2124]: I1101 00:22:54.159396 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sz2tt" podStartSLOduration=3.323003409 podStartE2EDuration="6.159380395s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="2025-11-01 00:22:48.924741015 +0000 UTC m=+6.388634153" lastFinishedPulling="2025-11-01 00:22:51.761117961 +0000 UTC m=+9.225011139" observedRunningTime="2025-11-01 00:22:52.678994058 +0000 UTC m=+10.142887196" watchObservedRunningTime="2025-11-01 00:22:54.159380395 +0000 UTC m=+11.623273533" Nov 1 00:22:55.636389 
kubelet[2124]: E1101 00:22:55.636354 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:55.673658 kubelet[2124]: E1101 00:22:55.673610 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:57.206100 sudo[1485]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:57.210601 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 00:22:57.210694 kernel: audit: type=1106 audit(1761956577.204:281): pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:57.204000 audit[1485]: USER_END pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:57.210909 sshd[1479]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:57.214175 kernel: audit: type=1104 audit(1761956577.204:282): pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:22:57.204000 audit[1485]: CRED_DISP pid=1485 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:57.213000 audit[1479]: USER_END pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:57.216511 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:42036.service: Deactivated successfully. Nov 1 00:22:57.217230 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:22:57.217677 systemd-logind[1310]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:22:57.218374 systemd-logind[1310]: Removed session 7. Nov 1 00:22:57.213000 audit[1479]: CRED_DISP pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:57.223338 kernel: audit: type=1106 audit(1761956577.213:283): pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:57.223480 kernel: audit: type=1104 audit(1761956577.213:284): pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:22:57.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.92:22-10.0.0.1:42036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:57.226868 kernel: audit: type=1131 audit(1761956577.214:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.92:22-10.0.0.1:42036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:57.539967 update_engine[1312]: I1101 00:22:57.539359 1312 update_attempter.cc:509] Updating boot flags... Nov 1 00:22:58.013175 kernel: audit: type=1325 audit(1761956578.006:286): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:58.013307 kernel: audit: type=1300 audit(1761956578.006:286): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffbbb8f30 a2=0 a3=1 items=0 ppid=2274 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:58.006000 audit[2528]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:58.006000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffbbb8f30 a2=0 a3=1 items=0 ppid=2274 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:58.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:58.015267 kernel: audit: type=1327 audit(1761956578.006:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:58.013000 audit[2528]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2528 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:58.017437 kernel: audit: type=1325 audit(1761956578.013:287): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:58.013000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffbbb8f30 a2=0 a3=1 items=0 ppid=2274 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:58.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:58.034438 kernel: audit: type=1300 audit(1761956578.013:287): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffbbb8f30 a2=0 a3=1 items=0 ppid=2274 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:58.088000 audit[2530]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:58.088000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa097050 a2=0 a3=1 items=0 ppid=2274 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:58.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:58.096000 audit[2530]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 
00:22:58.096000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa097050 a2=0 a3=1 items=0 ppid=2274 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:58.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:00.674000 audit[2532]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:00.674000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc44d29a0 a2=0 a3=1 items=0 ppid=2274 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:00.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:00.682000 audit[2532]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:00.682000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc44d29a0 a2=0 a3=1 items=0 ppid=2274 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:00.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:00.703000 audit[2534]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Nov 1 00:23:00.703000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb416450 a2=0 a3=1 items=0 ppid=2274 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:00.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:00.709000 audit[2534]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:00.709000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb416450 a2=0 a3=1 items=0 ppid=2274 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:00.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:01.719000 audit[2536]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:01.719000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff6c45f30 a2=0 a3=1 items=0 ppid=2274 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:01.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:01.725000 audit[2536]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2536 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:01.725000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff6c45f30 a2=0 a3=1 items=0 ppid=2274 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:01.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.901456 kernel: kauditd_printk_skb: 25 callbacks suppressed Nov 1 00:23:03.901553 kernel: audit: type=1325 audit(1761956583.897:296): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.901589 kernel: audit: type=1300 audit(1761956583.897:296): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdb031ec0 a2=0 a3=1 items=0 ppid=2274 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.897000 audit[2538]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.897000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdb031ec0 a2=0 a3=1 items=0 ppid=2274 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.908400 kernel: audit: type=1327 audit(1761956583.897:296): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.908000 audit[2538]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.908000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb031ec0 a2=0 a3=1 items=0 ppid=2274 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.919460 kernel: audit: type=1325 audit(1761956583.908:297): table=nat:100 family=2 entries=12 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.919537 kernel: audit: type=1300 audit(1761956583.908:297): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb031ec0 a2=0 a3=1 items=0 ppid=2274 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.926809 kernel: audit: type=1327 audit(1761956583.908:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.944000 audit[2541]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.944000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff7652c60 a2=0 a3=1 items=0 ppid=2274 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.951983 kernel: audit: type=1325 audit(1761956583.944:298): table=filter:101 family=2 entries=22 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.952056 kernel: audit: type=1300 audit(1761956583.944:298): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff7652c60 a2=0 a3=1 items=0 ppid=2274 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.952079 kernel: audit: type=1327 audit(1761956583.944:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.960000 audit[2541]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.960000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff7652c60 a2=0 a3=1 items=0 ppid=2274 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:03.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:03.964434 kernel: audit: type=1325 audit(1761956583.960:299): table=nat:102 family=2 entries=12 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:03.972727 kubelet[2124]: I1101 00:23:03.972673 2124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b-typha-certs\") pod \"calico-typha-7c5f7b7498-rtx9m\" (UID: \"525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b\") " pod="calico-system/calico-typha-7c5f7b7498-rtx9m" Nov 1 00:23:03.973163 kubelet[2124]: I1101 00:23:03.973138 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ww2\" (UniqueName: \"kubernetes.io/projected/525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b-kube-api-access-n2ww2\") pod \"calico-typha-7c5f7b7498-rtx9m\" (UID: \"525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b\") " pod="calico-system/calico-typha-7c5f7b7498-rtx9m" Nov 1 00:23:03.973209 kubelet[2124]: I1101 00:23:03.973177 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b-tigera-ca-bundle\") pod \"calico-typha-7c5f7b7498-rtx9m\" (UID: \"525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b\") " pod="calico-system/calico-typha-7c5f7b7498-rtx9m" Nov 1 00:23:04.174813 kubelet[2124]: I1101 00:23:04.174719 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-cni-log-dir\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174813 kubelet[2124]: I1101 00:23:04.174769 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-lib-modules\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174813 kubelet[2124]: I1101 00:23:04.174786 2124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-cni-bin-dir\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174813 kubelet[2124]: I1101 00:23:04.174804 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-var-run-calico\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174988 kubelet[2124]: I1101 00:23:04.174822 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-policysync\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174988 kubelet[2124]: I1101 00:23:04.174846 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-flexvol-driver-host\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174988 kubelet[2124]: I1101 00:23:04.174865 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b243908c-21b7-47ba-b236-a77d185af913-node-certs\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174988 kubelet[2124]: I1101 00:23:04.174879 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b243908c-21b7-47ba-b236-a77d185af913-tigera-ca-bundle\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.174988 kubelet[2124]: I1101 00:23:04.174899 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-xtables-lock\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.175097 kubelet[2124]: I1101 00:23:04.174924 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzc9j\" (UniqueName: \"kubernetes.io/projected/b243908c-21b7-47ba-b236-a77d185af913-kube-api-access-mzc9j\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.175097 kubelet[2124]: I1101 00:23:04.174941 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-cni-net-dir\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.175097 kubelet[2124]: I1101 00:23:04.174955 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b243908c-21b7-47ba-b236-a77d185af913-var-lib-calico\") pod \"calico-node-wfrnf\" (UID: \"b243908c-21b7-47ba-b236-a77d185af913\") " pod="calico-system/calico-node-wfrnf" Nov 1 00:23:04.217960 kubelet[2124]: E1101 00:23:04.217924 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 1 00:23:04.218395 env[1324]: time="2025-11-01T00:23:04.218349406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c5f7b7498-rtx9m,Uid:525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:04.232008 env[1324]: time="2025-11-01T00:23:04.231945235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:04.232008 env[1324]: time="2025-11-01T00:23:04.231986636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:04.232008 env[1324]: time="2025-11-01T00:23:04.231996916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:04.232224 env[1324]: time="2025-11-01T00:23:04.232188956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efafc72fcbabc8ffdff2c7c351940700165ba34bc39ba11c69e4935468f79504 pid=2552 runtime=io.containerd.runc.v2 Nov 1 00:23:04.276903 env[1324]: time="2025-11-01T00:23:04.276520093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c5f7b7498-rtx9m,Uid:525a1d6d-3d5a-4ee8-9dcd-48b6febc6d6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"efafc72fcbabc8ffdff2c7c351940700165ba34bc39ba11c69e4935468f79504\"" Nov 1 00:23:04.279370 kubelet[2124]: E1101 00:23:04.277862 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.279370 kubelet[2124]: W1101 00:23:04.277892 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.279370 kubelet[2124]: E1101 00:23:04.278036 2124 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:04.279370 kubelet[2124]: E1101 00:23:04.278453 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.279370 kubelet[2124]: E1101 00:23:04.278660 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.279370 kubelet[2124]: W1101 00:23:04.278803 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.279370 kubelet[2124]: E1101 00:23:04.278818 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.281701 env[1324]: time="2025-11-01T00:23:04.281146943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:23:04.287619 kubelet[2124]: E1101 00:23:04.287562 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.287619 kubelet[2124]: W1101 00:23:04.287587 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.287619 kubelet[2124]: E1101 00:23:04.287617 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.287857 kubelet[2124]: E1101 00:23:04.287836 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.287920 kubelet[2124]: W1101 00:23:04.287907 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.287984 kubelet[2124]: E1101 00:23:04.287972 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.288602 kubelet[2124]: E1101 00:23:04.288569 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.288602 kubelet[2124]: W1101 00:23:04.288590 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.288757 kubelet[2124]: E1101 00:23:04.288605 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.290910 kubelet[2124]: E1101 00:23:04.290868 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.290910 kubelet[2124]: W1101 00:23:04.290882 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.290910 kubelet[2124]: E1101 00:23:04.290895 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.297338 kubelet[2124]: E1101 00:23:04.297317 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.297459 kubelet[2124]: W1101 00:23:04.297443 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.297532 kubelet[2124]: E1101 00:23:04.297519 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.299445 kubelet[2124]: E1101 00:23:04.299398 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:04.365066 kubelet[2124]: E1101 00:23:04.365034 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.365066 kubelet[2124]: W1101 00:23:04.365054 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.365066 kubelet[2124]: E1101 00:23:04.365074 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.365239 kubelet[2124]: E1101 00:23:04.365217 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.365268 kubelet[2124]: W1101 00:23:04.365224 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.365268 kubelet[2124]: E1101 00:23:04.365255 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.365423 kubelet[2124]: E1101 00:23:04.365389 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.365423 kubelet[2124]: W1101 00:23:04.365400 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.365423 kubelet[2124]: E1101 00:23:04.365423 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.365568 kubelet[2124]: E1101 00:23:04.365557 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.365568 kubelet[2124]: W1101 00:23:04.365567 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.365623 kubelet[2124]: E1101 00:23:04.365574 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.365717 kubelet[2124]: E1101 00:23:04.365707 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.365717 kubelet[2124]: W1101 00:23:04.365717 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.365813 kubelet[2124]: E1101 00:23:04.365724 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.365875 kubelet[2124]: E1101 00:23:04.365862 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.365875 kubelet[2124]: W1101 00:23:04.365874 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.365926 kubelet[2124]: E1101 00:23:04.365881 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.366024 kubelet[2124]: E1101 00:23:04.366000 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366024 kubelet[2124]: W1101 00:23:04.366011 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366024 kubelet[2124]: E1101 00:23:04.366018 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.366148 kubelet[2124]: E1101 00:23:04.366138 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366148 kubelet[2124]: W1101 00:23:04.366147 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366198 kubelet[2124]: E1101 00:23:04.366157 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.366296 kubelet[2124]: E1101 00:23:04.366287 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366331 kubelet[2124]: W1101 00:23:04.366296 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366331 kubelet[2124]: E1101 00:23:04.366304 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.366465 kubelet[2124]: E1101 00:23:04.366453 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366465 kubelet[2124]: W1101 00:23:04.366464 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366526 kubelet[2124]: E1101 00:23:04.366472 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.366606 kubelet[2124]: E1101 00:23:04.366596 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366606 kubelet[2124]: W1101 00:23:04.366605 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366657 kubelet[2124]: E1101 00:23:04.366612 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.366743 kubelet[2124]: E1101 00:23:04.366734 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366743 kubelet[2124]: W1101 00:23:04.366743 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366790 kubelet[2124]: E1101 00:23:04.366750 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.366895 kubelet[2124]: E1101 00:23:04.366885 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.366895 kubelet[2124]: W1101 00:23:04.366895 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.366953 kubelet[2124]: E1101 00:23:04.366902 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.367027 kubelet[2124]: E1101 00:23:04.367018 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367055 kubelet[2124]: W1101 00:23:04.367028 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367055 kubelet[2124]: E1101 00:23:04.367036 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.367160 kubelet[2124]: E1101 00:23:04.367151 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367185 kubelet[2124]: W1101 00:23:04.367159 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367185 kubelet[2124]: E1101 00:23:04.367167 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.367361 kubelet[2124]: E1101 00:23:04.367348 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367361 kubelet[2124]: W1101 00:23:04.367360 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367428 kubelet[2124]: E1101 00:23:04.367368 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.367537 kubelet[2124]: E1101 00:23:04.367526 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367537 kubelet[2124]: W1101 00:23:04.367535 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367590 kubelet[2124]: E1101 00:23:04.367544 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.367680 kubelet[2124]: E1101 00:23:04.367670 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367706 kubelet[2124]: W1101 00:23:04.367681 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367706 kubelet[2124]: E1101 00:23:04.367688 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.367821 kubelet[2124]: E1101 00:23:04.367811 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367821 kubelet[2124]: W1101 00:23:04.367819 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367868 kubelet[2124]: E1101 00:23:04.367828 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.367954 kubelet[2124]: E1101 00:23:04.367945 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.367980 kubelet[2124]: W1101 00:23:04.367954 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.367980 kubelet[2124]: E1101 00:23:04.367961 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.378173 kubelet[2124]: E1101 00:23:04.378150 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.378321 kubelet[2124]: W1101 00:23:04.378180 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.378321 kubelet[2124]: E1101 00:23:04.378207 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.378321 kubelet[2124]: I1101 00:23:04.378274 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkz8\" (UniqueName: \"kubernetes.io/projected/a584285f-c40b-477a-8ddb-bfa9e3439fe6-kube-api-access-ppkz8\") pod \"csi-node-driver-52pqx\" (UID: \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\") " pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:04.378661 kubelet[2124]: E1101 00:23:04.378469 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.378661 kubelet[2124]: W1101 00:23:04.378481 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.378661 kubelet[2124]: E1101 00:23:04.378493 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.378661 kubelet[2124]: I1101 00:23:04.378509 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a584285f-c40b-477a-8ddb-bfa9e3439fe6-socket-dir\") pod \"csi-node-driver-52pqx\" (UID: \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\") " pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:04.379039 kubelet[2124]: E1101 00:23:04.379018 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.379101 kubelet[2124]: W1101 00:23:04.379038 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.379101 kubelet[2124]: E1101 00:23:04.379057 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.379101 kubelet[2124]: I1101 00:23:04.379074 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a584285f-c40b-477a-8ddb-bfa9e3439fe6-kubelet-dir\") pod \"csi-node-driver-52pqx\" (UID: \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\") " pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:04.379446 kubelet[2124]: E1101 00:23:04.379280 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.379446 kubelet[2124]: W1101 00:23:04.379296 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.379446 kubelet[2124]: E1101 00:23:04.379314 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.379446 kubelet[2124]: I1101 00:23:04.379331 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a584285f-c40b-477a-8ddb-bfa9e3439fe6-registration-dir\") pod \"csi-node-driver-52pqx\" (UID: \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\") " pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:04.379712 kubelet[2124]: E1101 00:23:04.379546 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.379712 kubelet[2124]: W1101 00:23:04.379556 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.379712 kubelet[2124]: E1101 00:23:04.379566 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.379712 kubelet[2124]: I1101 00:23:04.379583 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a584285f-c40b-477a-8ddb-bfa9e3439fe6-varrun\") pod \"csi-node-driver-52pqx\" (UID: \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\") " pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:04.380487 kubelet[2124]: E1101 00:23:04.380114 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.380487 kubelet[2124]: W1101 00:23:04.380132 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.380487 kubelet[2124]: E1101 00:23:04.380209 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.380487 kubelet[2124]: E1101 00:23:04.380347 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.380487 kubelet[2124]: W1101 00:23:04.380356 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.380487 kubelet[2124]: E1101 00:23:04.380393 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.380667 kubelet[2124]: E1101 00:23:04.380537 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.380667 kubelet[2124]: W1101 00:23:04.380547 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.380667 kubelet[2124]: E1101 00:23:04.380588 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.380733 kubelet[2124]: E1101 00:23:04.380681 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.380733 kubelet[2124]: W1101 00:23:04.380688 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.380819 kubelet[2124]: E1101 00:23:04.380800 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.382531 kubelet[2124]: E1101 00:23:04.381246 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.382634 kubelet[2124]: W1101 00:23:04.382617 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.382776 kubelet[2124]: E1101 00:23:04.382740 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.382983 kubelet[2124]: E1101 00:23:04.382970 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.383064 kubelet[2124]: W1101 00:23:04.383052 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.383124 kubelet[2124]: E1101 00:23:04.383113 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.383494 kubelet[2124]: E1101 00:23:04.383479 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.383602 kubelet[2124]: W1101 00:23:04.383573 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.383699 kubelet[2124]: E1101 00:23:04.383687 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.384128 kubelet[2124]: E1101 00:23:04.384113 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.384218 kubelet[2124]: W1101 00:23:04.384205 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.384274 kubelet[2124]: E1101 00:23:04.384263 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.386539 kubelet[2124]: E1101 00:23:04.386437 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.386631 kubelet[2124]: W1101 00:23:04.386615 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.386704 kubelet[2124]: E1101 00:23:04.386691 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.386964 kubelet[2124]: E1101 00:23:04.386952 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.387046 kubelet[2124]: W1101 00:23:04.387032 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.387115 kubelet[2124]: E1101 00:23:04.387102 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.409175 kubelet[2124]: E1101 00:23:04.409150 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:04.410305 env[1324]: time="2025-11-01T00:23:04.410267544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wfrnf,Uid:b243908c-21b7-47ba-b236-a77d185af913,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:04.424082 env[1324]: time="2025-11-01T00:23:04.424021374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:04.424250 env[1324]: time="2025-11-01T00:23:04.424074734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:04.424250 env[1324]: time="2025-11-01T00:23:04.424086374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:04.424335 env[1324]: time="2025-11-01T00:23:04.424252415Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26 pid=2648 runtime=io.containerd.runc.v2 Nov 1 00:23:04.459044 env[1324]: time="2025-11-01T00:23:04.459005851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wfrnf,Uid:b243908c-21b7-47ba-b236-a77d185af913,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\"" Nov 1 00:23:04.460155 kubelet[2124]: E1101 00:23:04.459826 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:04.480859 kubelet[2124]: E1101 00:23:04.480833 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.480859 kubelet[2124]: W1101 00:23:04.480857 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.480985 kubelet[2124]: E1101 00:23:04.480874 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.481132 kubelet[2124]: E1101 00:23:04.481120 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.481167 kubelet[2124]: W1101 00:23:04.481132 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.481167 kubelet[2124]: E1101 00:23:04.481146 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.481361 kubelet[2124]: E1101 00:23:04.481348 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.481361 kubelet[2124]: W1101 00:23:04.481360 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.481456 kubelet[2124]: E1101 00:23:04.481378 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.481542 kubelet[2124]: E1101 00:23:04.481531 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.481577 kubelet[2124]: W1101 00:23:04.481542 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.481577 kubelet[2124]: E1101 00:23:04.481552 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.481726 kubelet[2124]: E1101 00:23:04.481714 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.481726 kubelet[2124]: W1101 00:23:04.481725 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.481787 kubelet[2124]: E1101 00:23:04.481734 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.481928 kubelet[2124]: E1101 00:23:04.481915 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.481928 kubelet[2124]: W1101 00:23:04.481926 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.481990 kubelet[2124]: E1101 00:23:04.481939 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.482085 kubelet[2124]: E1101 00:23:04.482072 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.482119 kubelet[2124]: W1101 00:23:04.482086 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.482146 kubelet[2124]: E1101 00:23:04.482128 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.482237 kubelet[2124]: E1101 00:23:04.482227 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.482267 kubelet[2124]: W1101 00:23:04.482237 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.482289 kubelet[2124]: E1101 00:23:04.482270 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.482390 kubelet[2124]: E1101 00:23:04.482378 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.482390 kubelet[2124]: W1101 00:23:04.482388 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.482459 kubelet[2124]: E1101 00:23:04.482401 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.482572 kubelet[2124]: E1101 00:23:04.482562 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.482599 kubelet[2124]: W1101 00:23:04.482572 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.482623 kubelet[2124]: E1101 00:23:04.482606 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.482706 kubelet[2124]: E1101 00:23:04.482695 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.482734 kubelet[2124]: W1101 00:23:04.482711 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.482759 kubelet[2124]: E1101 00:23:04.482747 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.482850 kubelet[2124]: E1101 00:23:04.482840 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.482850 kubelet[2124]: W1101 00:23:04.482849 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.482903 kubelet[2124]: E1101 00:23:04.482891 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.482996 kubelet[2124]: E1101 00:23:04.482986 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.483023 kubelet[2124]: W1101 00:23:04.482995 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.483023 kubelet[2124]: E1101 00:23:04.483018 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.483390 kubelet[2124]: E1101 00:23:04.483375 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.483390 kubelet[2124]: W1101 00:23:04.483390 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.483467 kubelet[2124]: E1101 00:23:04.483452 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.483558 kubelet[2124]: E1101 00:23:04.483546 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.483558 kubelet[2124]: W1101 00:23:04.483556 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.484655 kubelet[2124]: E1101 00:23:04.484634 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.484655 kubelet[2124]: W1101 00:23:04.484653 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.484742 kubelet[2124]: E1101 00:23:04.484674 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.485442 kubelet[2124]: E1101 00:23:04.485339 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.485643 kubelet[2124]: E1101 00:23:04.485608 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.489755 kubelet[2124]: W1101 00:23:04.485643 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.489755 kubelet[2124]: E1101 00:23:04.487564 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.489755 kubelet[2124]: E1101 00:23:04.487644 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.489755 kubelet[2124]: W1101 00:23:04.487652 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.489755 kubelet[2124]: E1101 00:23:04.487719 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.489755 kubelet[2124]: E1101 00:23:04.487817 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.489755 kubelet[2124]: W1101 00:23:04.487824 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.489755 kubelet[2124]: E1101 00:23:04.487896 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.489755 kubelet[2124]: E1101 00:23:04.487970 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.489755 kubelet[2124]: W1101 00:23:04.487977 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488039 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488151 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.490020 kubelet[2124]: W1101 00:23:04.488160 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488175 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488478 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.490020 kubelet[2124]: W1101 00:23:04.488497 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488508 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488729 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.490020 kubelet[2124]: W1101 00:23:04.488739 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.490020 kubelet[2124]: E1101 00:23:04.488753 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.490253 kubelet[2124]: E1101 00:23:04.488921 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.490253 kubelet[2124]: W1101 00:23:04.488928 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.490253 kubelet[2124]: E1101 00:23:04.488936 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.490253 kubelet[2124]: E1101 00:23:04.489101 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.490253 kubelet[2124]: W1101 00:23:04.489108 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.490253 kubelet[2124]: E1101 00:23:04.489115 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:04.494272 kubelet[2124]: E1101 00:23:04.494215 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:04.494272 kubelet[2124]: W1101 00:23:04.494231 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:04.494272 kubelet[2124]: E1101 00:23:04.494244 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:04.976000 audit[2709]: NETFILTER_CFG table=filter:103 family=2 entries=22 op=nft_register_rule pid=2709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:04.976000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe24d6790 a2=0 a3=1 items=0 ppid=2274 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:04.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:04.982000 audit[2709]: NETFILTER_CFG table=nat:104 family=2 entries=12 op=nft_register_rule pid=2709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:04.982000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe24d6790 a2=0 a3=1 items=0 ppid=2274 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:04.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:05.171721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945058806.mount: Deactivated successfully. 
Nov 1 00:23:05.639731 kubelet[2124]: E1101 00:23:05.639671 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:05.986960 env[1324]: time="2025-11-01T00:23:05.986904768Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:05.988544 env[1324]: time="2025-11-01T00:23:05.988516651Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:05.990163 env[1324]: time="2025-11-01T00:23:05.990135974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:05.991457 env[1324]: time="2025-11-01T00:23:05.991428937Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:05.991871 env[1324]: time="2025-11-01T00:23:05.991851498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 1 00:23:05.994059 env[1324]: time="2025-11-01T00:23:05.994034582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:23:06.008184 env[1324]: time="2025-11-01T00:23:06.008141610Z" level=info msg="CreateContainer within sandbox 
\"efafc72fcbabc8ffdff2c7c351940700165ba34bc39ba11c69e4935468f79504\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:23:06.018006 env[1324]: time="2025-11-01T00:23:06.017963989Z" level=info msg="CreateContainer within sandbox \"efafc72fcbabc8ffdff2c7c351940700165ba34bc39ba11c69e4935468f79504\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"084b2dfafd81be3b4b5b1b4e1fcb8c42586c9301a4d8d89786e4ecd8de3dc92f\"" Nov 1 00:23:06.018637 env[1324]: time="2025-11-01T00:23:06.018581510Z" level=info msg="StartContainer for \"084b2dfafd81be3b4b5b1b4e1fcb8c42586c9301a4d8d89786e4ecd8de3dc92f\"" Nov 1 00:23:06.112513 env[1324]: time="2025-11-01T00:23:06.112433370Z" level=info msg="StartContainer for \"084b2dfafd81be3b4b5b1b4e1fcb8c42586c9301a4d8d89786e4ecd8de3dc92f\" returns successfully" Nov 1 00:23:06.696455 kubelet[2124]: E1101 00:23:06.696389 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:06.785816 kubelet[2124]: E1101 00:23:06.785789 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.785998 kubelet[2124]: W1101 00:23:06.785981 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.786084 kubelet[2124]: E1101 00:23:06.786070 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.786324 kubelet[2124]: E1101 00:23:06.786310 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.786433 kubelet[2124]: W1101 00:23:06.786419 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.786517 kubelet[2124]: E1101 00:23:06.786505 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.786767 kubelet[2124]: E1101 00:23:06.786753 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.786858 kubelet[2124]: W1101 00:23:06.786845 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.786937 kubelet[2124]: E1101 00:23:06.786925 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.787170 kubelet[2124]: E1101 00:23:06.787157 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.787251 kubelet[2124]: W1101 00:23:06.787237 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.787335 kubelet[2124]: E1101 00:23:06.787322 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.787596 kubelet[2124]: E1101 00:23:06.787583 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.787687 kubelet[2124]: W1101 00:23:06.787675 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.787767 kubelet[2124]: E1101 00:23:06.787755 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.788016 kubelet[2124]: E1101 00:23:06.788003 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.788093 kubelet[2124]: W1101 00:23:06.788081 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.788168 kubelet[2124]: E1101 00:23:06.788158 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.788399 kubelet[2124]: E1101 00:23:06.788381 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.788508 kubelet[2124]: W1101 00:23:06.788495 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.788585 kubelet[2124]: E1101 00:23:06.788574 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.792278 kubelet[2124]: E1101 00:23:06.792261 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.792385 kubelet[2124]: W1101 00:23:06.792370 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.792489 kubelet[2124]: E1101 00:23:06.792476 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.792703 kubelet[2124]: E1101 00:23:06.792690 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.792783 kubelet[2124]: W1101 00:23:06.792769 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.792848 kubelet[2124]: E1101 00:23:06.792837 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.793044 kubelet[2124]: E1101 00:23:06.793032 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.793116 kubelet[2124]: W1101 00:23:06.793103 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.793171 kubelet[2124]: E1101 00:23:06.793160 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.793396 kubelet[2124]: E1101 00:23:06.793382 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.793495 kubelet[2124]: W1101 00:23:06.793482 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.793558 kubelet[2124]: E1101 00:23:06.793547 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.793790 kubelet[2124]: E1101 00:23:06.793778 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.793863 kubelet[2124]: W1101 00:23:06.793851 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.793917 kubelet[2124]: E1101 00:23:06.793907 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.794131 kubelet[2124]: E1101 00:23:06.794119 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.794201 kubelet[2124]: W1101 00:23:06.794187 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.794263 kubelet[2124]: E1101 00:23:06.794252 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.794487 kubelet[2124]: E1101 00:23:06.794475 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.794569 kubelet[2124]: W1101 00:23:06.794555 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.794632 kubelet[2124]: E1101 00:23:06.794620 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.795229 kubelet[2124]: E1101 00:23:06.795214 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.795324 kubelet[2124]: W1101 00:23:06.795310 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.795393 kubelet[2124]: E1101 00:23:06.795382 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.799316 kubelet[2124]: E1101 00:23:06.799291 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.799443 kubelet[2124]: W1101 00:23:06.799427 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.799578 kubelet[2124]: E1101 00:23:06.799561 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.800063 kubelet[2124]: E1101 00:23:06.800048 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.800156 kubelet[2124]: W1101 00:23:06.800141 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.800224 kubelet[2124]: E1101 00:23:06.800212 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.800480 kubelet[2124]: E1101 00:23:06.800464 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.800539 kubelet[2124]: W1101 00:23:06.800481 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.800539 kubelet[2124]: E1101 00:23:06.800496 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.800658 kubelet[2124]: E1101 00:23:06.800643 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.800658 kubelet[2124]: W1101 00:23:06.800654 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.800724 kubelet[2124]: E1101 00:23:06.800664 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.800797 kubelet[2124]: E1101 00:23:06.800783 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.800797 kubelet[2124]: W1101 00:23:06.800793 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.800859 kubelet[2124]: E1101 00:23:06.800801 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.801498 kubelet[2124]: E1101 00:23:06.801477 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.801498 kubelet[2124]: W1101 00:23:06.801490 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.801498 kubelet[2124]: E1101 00:23:06.801503 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.801851 kubelet[2124]: E1101 00:23:06.801838 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.801851 kubelet[2124]: W1101 00:23:06.801850 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.801931 kubelet[2124]: E1101 00:23:06.801877 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.802001 kubelet[2124]: E1101 00:23:06.801991 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.802041 kubelet[2124]: W1101 00:23:06.802003 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.802084 kubelet[2124]: E1101 00:23:06.802071 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.802155 kubelet[2124]: E1101 00:23:06.802146 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.802191 kubelet[2124]: W1101 00:23:06.802155 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.802191 kubelet[2124]: E1101 00:23:06.802167 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.802324 kubelet[2124]: E1101 00:23:06.802314 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.802324 kubelet[2124]: W1101 00:23:06.802324 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.802395 kubelet[2124]: E1101 00:23:06.802336 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.802912 kubelet[2124]: E1101 00:23:06.802881 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.802912 kubelet[2124]: W1101 00:23:06.802894 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.802912 kubelet[2124]: E1101 00:23:06.802906 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.803090 kubelet[2124]: E1101 00:23:06.803070 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.803090 kubelet[2124]: W1101 00:23:06.803082 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.803190 kubelet[2124]: E1101 00:23:06.803095 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.803325 kubelet[2124]: E1101 00:23:06.803294 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.803325 kubelet[2124]: W1101 00:23:06.803319 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.803394 kubelet[2124]: E1101 00:23:06.803333 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.803801 kubelet[2124]: E1101 00:23:06.803773 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.803801 kubelet[2124]: W1101 00:23:06.803792 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.803879 kubelet[2124]: E1101 00:23:06.803811 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.803998 kubelet[2124]: E1101 00:23:06.803979 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.803998 kubelet[2124]: W1101 00:23:06.803992 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.804056 kubelet[2124]: E1101 00:23:06.804006 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.804199 kubelet[2124]: E1101 00:23:06.804182 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.804199 kubelet[2124]: W1101 00:23:06.804195 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.804263 kubelet[2124]: E1101 00:23:06.804210 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.804445 kubelet[2124]: E1101 00:23:06.804426 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.804445 kubelet[2124]: W1101 00:23:06.804442 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.804517 kubelet[2124]: E1101 00:23:06.804454 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:06.805511 kubelet[2124]: E1101 00:23:06.805320 2124 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:06.805511 kubelet[2124]: W1101 00:23:06.805508 2124 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:06.805607 kubelet[2124]: E1101 00:23:06.805523 2124 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:06.918483 env[1324]: time="2025-11-01T00:23:06.918423155Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:06.920090 env[1324]: time="2025-11-01T00:23:06.920048158Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:06.921634 env[1324]: time="2025-11-01T00:23:06.921598321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:06.924150 env[1324]: time="2025-11-01T00:23:06.924114046Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:06.924557 env[1324]: time="2025-11-01T00:23:06.924520207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 1 00:23:06.928034 env[1324]: time="2025-11-01T00:23:06.928003333Z" level=info msg="CreateContainer within sandbox \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:23:06.939423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051286118.mount: Deactivated successfully. 
Nov 1 00:23:06.943191 env[1324]: time="2025-11-01T00:23:06.943144882Z" level=info msg="CreateContainer within sandbox \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a405cae369a3e4299383759208b4acfc9e7c9f9c21bff53259977e762f7c28aa\"" Nov 1 00:23:06.945195 env[1324]: time="2025-11-01T00:23:06.945157086Z" level=info msg="StartContainer for \"a405cae369a3e4299383759208b4acfc9e7c9f9c21bff53259977e762f7c28aa\"" Nov 1 00:23:06.999381 env[1324]: time="2025-11-01T00:23:06.999274590Z" level=info msg="StartContainer for \"a405cae369a3e4299383759208b4acfc9e7c9f9c21bff53259977e762f7c28aa\" returns successfully" Nov 1 00:23:07.032731 env[1324]: time="2025-11-01T00:23:07.032680970Z" level=info msg="shim disconnected" id=a405cae369a3e4299383759208b4acfc9e7c9f9c21bff53259977e762f7c28aa Nov 1 00:23:07.032731 env[1324]: time="2025-11-01T00:23:07.032728810Z" level=warning msg="cleaning up after shim disconnected" id=a405cae369a3e4299383759208b4acfc9e7c9f9c21bff53259977e762f7c28aa namespace=k8s.io Nov 1 00:23:07.032918 env[1324]: time="2025-11-01T00:23:07.032740890Z" level=info msg="cleaning up dead shim" Nov 1 00:23:07.038997 env[1324]: time="2025-11-01T00:23:07.038960421Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2835 runtime=io.containerd.runc.v2\n" Nov 1 00:23:07.079638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a405cae369a3e4299383759208b4acfc9e7c9f9c21bff53259977e762f7c28aa-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:07.639921 kubelet[2124]: E1101 00:23:07.639874 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:07.698784 kubelet[2124]: I1101 00:23:07.698756 2124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:07.699444 kubelet[2124]: E1101 00:23:07.699394 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:07.699575 kubelet[2124]: E1101 00:23:07.699032 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:07.700889 env[1324]: time="2025-11-01T00:23:07.700851171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:23:07.714878 kubelet[2124]: I1101 00:23:07.714811 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c5f7b7498-rtx9m" podStartSLOduration=3.000562033 podStartE2EDuration="4.714786196s" podCreationTimestamp="2025-11-01 00:23:03 +0000 UTC" firstStartedPulling="2025-11-01 00:23:04.279547259 +0000 UTC m=+21.743440437" lastFinishedPulling="2025-11-01 00:23:05.993771422 +0000 UTC m=+23.457664600" observedRunningTime="2025-11-01 00:23:06.711467038 +0000 UTC m=+24.175360256" watchObservedRunningTime="2025-11-01 00:23:07.714786196 +0000 UTC m=+25.178679374" Nov 1 00:23:09.640898 kubelet[2124]: E1101 00:23:09.639964 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:10.336430 env[1324]: time="2025-11-01T00:23:10.336345550Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:10.338220 env[1324]: time="2025-11-01T00:23:10.338180753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:10.339848 env[1324]: time="2025-11-01T00:23:10.339815755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:10.344583 env[1324]: time="2025-11-01T00:23:10.344540042Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:10.345159 env[1324]: time="2025-11-01T00:23:10.345117923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 1 00:23:10.348098 env[1324]: time="2025-11-01T00:23:10.347575127Z" level=info msg="CreateContainer within sandbox \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:23:10.359933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138057950.mount: Deactivated successfully. 
Nov 1 00:23:10.362647 env[1324]: time="2025-11-01T00:23:10.362609149Z" level=info msg="CreateContainer within sandbox \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf6eb534b7d46b97dec9b08dfcd673f728546a5cbc417259c3ce93a4b82686e8\"" Nov 1 00:23:10.363087 env[1324]: time="2025-11-01T00:23:10.363048310Z" level=info msg="StartContainer for \"cf6eb534b7d46b97dec9b08dfcd673f728546a5cbc417259c3ce93a4b82686e8\"" Nov 1 00:23:10.487955 env[1324]: time="2025-11-01T00:23:10.487904615Z" level=info msg="StartContainer for \"cf6eb534b7d46b97dec9b08dfcd673f728546a5cbc417259c3ce93a4b82686e8\" returns successfully" Nov 1 00:23:10.708753 kubelet[2124]: E1101 00:23:10.708712 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:11.093852 env[1324]: time="2025-11-01T00:23:11.093723863Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:23:11.097507 kubelet[2124]: I1101 00:23:11.097470 2124 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:23:11.119998 env[1324]: time="2025-11-01T00:23:11.119943579Z" level=info msg="shim disconnected" id=cf6eb534b7d46b97dec9b08dfcd673f728546a5cbc417259c3ce93a4b82686e8 Nov 1 00:23:11.119998 env[1324]: time="2025-11-01T00:23:11.119987459Z" level=warning msg="cleaning up after shim disconnected" id=cf6eb534b7d46b97dec9b08dfcd673f728546a5cbc417259c3ce93a4b82686e8 namespace=k8s.io Nov 1 00:23:11.119998 env[1324]: time="2025-11-01T00:23:11.119996299Z" level=info msg="cleaning up dead shim" Nov 1 00:23:11.134196 kubelet[2124]: I1101 00:23:11.134136 2124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3dd8f6c-3b39-4d19-a732-fff37a40f25e-calico-apiserver-certs\") pod \"calico-apiserver-77cfb4d4d6-4nr5k\" (UID: \"e3dd8f6c-3b39-4d19-a732-fff37a40f25e\") " pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" Nov 1 00:23:11.134196 kubelet[2124]: I1101 00:23:11.134182 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhn7d\" (UniqueName: \"kubernetes.io/projected/bf1893fd-31bf-427a-928e-11685512f41a-kube-api-access-jhn7d\") pod \"goldmane-666569f655-cp7jx\" (UID: \"bf1893fd-31bf-427a-928e-11685512f41a\") " pod="calico-system/goldmane-666569f655-cp7jx" Nov 1 00:23:11.134196 kubelet[2124]: I1101 00:23:11.134206 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfv7s\" (UniqueName: \"kubernetes.io/projected/098f9c0f-a24a-4001-88bf-ea4e44e957ea-kube-api-access-bfv7s\") pod \"calico-apiserver-77cfb4d4d6-l78bq\" (UID: \"098f9c0f-a24a-4001-88bf-ea4e44e957ea\") " pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" Nov 1 00:23:11.134481 kubelet[2124]: I1101 00:23:11.134224 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfbq7\" (UniqueName: \"kubernetes.io/projected/e95bf48f-c761-42b0-aba2-5fe9b024e3f4-kube-api-access-gfbq7\") pod \"coredns-668d6bf9bc-8c5b8\" (UID: \"e95bf48f-c761-42b0-aba2-5fe9b024e3f4\") " pod="kube-system/coredns-668d6bf9bc-8c5b8" Nov 1 00:23:11.134481 kubelet[2124]: I1101 00:23:11.134241 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1893fd-31bf-427a-928e-11685512f41a-config\") pod \"goldmane-666569f655-cp7jx\" (UID: \"bf1893fd-31bf-427a-928e-11685512f41a\") " pod="calico-system/goldmane-666569f655-cp7jx" Nov 1 
00:23:11.134481 kubelet[2124]: I1101 00:23:11.134261 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ceacdf1-40ab-4971-87a7-298d59c91848-config-volume\") pod \"coredns-668d6bf9bc-jfnxh\" (UID: \"2ceacdf1-40ab-4971-87a7-298d59c91848\") " pod="kube-system/coredns-668d6bf9bc-jfnxh" Nov 1 00:23:11.134481 kubelet[2124]: I1101 00:23:11.134277 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962b5\" (UniqueName: \"kubernetes.io/projected/2ceacdf1-40ab-4971-87a7-298d59c91848-kube-api-access-962b5\") pod \"coredns-668d6bf9bc-jfnxh\" (UID: \"2ceacdf1-40ab-4971-87a7-298d59c91848\") " pod="kube-system/coredns-668d6bf9bc-jfnxh" Nov 1 00:23:11.134481 kubelet[2124]: I1101 00:23:11.134306 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/098f9c0f-a24a-4001-88bf-ea4e44e957ea-calico-apiserver-certs\") pod \"calico-apiserver-77cfb4d4d6-l78bq\" (UID: \"098f9c0f-a24a-4001-88bf-ea4e44e957ea\") " pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" Nov 1 00:23:11.134603 kubelet[2124]: I1101 00:23:11.134324 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqqt\" (UniqueName: \"kubernetes.io/projected/e3dd8f6c-3b39-4d19-a732-fff37a40f25e-kube-api-access-6gqqt\") pod \"calico-apiserver-77cfb4d4d6-4nr5k\" (UID: \"e3dd8f6c-3b39-4d19-a732-fff37a40f25e\") " pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" Nov 1 00:23:11.134603 kubelet[2124]: I1101 00:23:11.134344 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95bf48f-c761-42b0-aba2-5fe9b024e3f4-config-volume\") pod \"coredns-668d6bf9bc-8c5b8\" (UID: 
\"e95bf48f-c761-42b0-aba2-5fe9b024e3f4\") " pod="kube-system/coredns-668d6bf9bc-8c5b8" Nov 1 00:23:11.134603 kubelet[2124]: I1101 00:23:11.134361 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bf1893fd-31bf-427a-928e-11685512f41a-goldmane-key-pair\") pod \"goldmane-666569f655-cp7jx\" (UID: \"bf1893fd-31bf-427a-928e-11685512f41a\") " pod="calico-system/goldmane-666569f655-cp7jx" Nov 1 00:23:11.134603 kubelet[2124]: I1101 00:23:11.134377 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1893fd-31bf-427a-928e-11685512f41a-goldmane-ca-bundle\") pod \"goldmane-666569f655-cp7jx\" (UID: \"bf1893fd-31bf-427a-928e-11685512f41a\") " pod="calico-system/goldmane-666569f655-cp7jx" Nov 1 00:23:11.149476 env[1324]: time="2025-11-01T00:23:11.146142416Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2907 runtime=io.containerd.runc.v2\n" Nov 1 00:23:11.235254 kubelet[2124]: I1101 00:23:11.234743 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-backend-key-pair\") pod \"whisker-595d769df8-7v79c\" (UID: \"c9ce1775-ce62-4551-bd13-ddb007698a7f\") " pod="calico-system/whisker-595d769df8-7v79c" Nov 1 00:23:11.235889 kubelet[2124]: I1101 00:23:11.235865 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b83bf5c0-405f-4b6b-b82a-0980cae1df67-tigera-ca-bundle\") pod \"calico-kube-controllers-866bcf4d9f-tt9sm\" (UID: \"b83bf5c0-405f-4b6b-b82a-0980cae1df67\") " pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" Nov 1 
00:23:11.236106 kubelet[2124]: I1101 00:23:11.236088 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-ca-bundle\") pod \"whisker-595d769df8-7v79c\" (UID: \"c9ce1775-ce62-4551-bd13-ddb007698a7f\") " pod="calico-system/whisker-595d769df8-7v79c" Nov 1 00:23:11.236213 kubelet[2124]: I1101 00:23:11.236198 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mpz\" (UniqueName: \"kubernetes.io/projected/c9ce1775-ce62-4551-bd13-ddb007698a7f-kube-api-access-98mpz\") pod \"whisker-595d769df8-7v79c\" (UID: \"c9ce1775-ce62-4551-bd13-ddb007698a7f\") " pod="calico-system/whisker-595d769df8-7v79c" Nov 1 00:23:11.236376 kubelet[2124]: I1101 00:23:11.236360 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74m96\" (UniqueName: \"kubernetes.io/projected/b83bf5c0-405f-4b6b-b82a-0980cae1df67-kube-api-access-74m96\") pod \"calico-kube-controllers-866bcf4d9f-tt9sm\" (UID: \"b83bf5c0-405f-4b6b-b82a-0980cae1df67\") " pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" Nov 1 00:23:11.364366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf6eb534b7d46b97dec9b08dfcd673f728546a5cbc417259c3ce93a4b82686e8-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:11.423031 kubelet[2124]: E1101 00:23:11.422994 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:11.424674 env[1324]: time="2025-11-01T00:23:11.424313482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfnxh,Uid:2ceacdf1-40ab-4971-87a7-298d59c91848,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:11.425047 kubelet[2124]: E1101 00:23:11.425022 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:11.425570 env[1324]: time="2025-11-01T00:23:11.425538563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c5b8,Uid:e95bf48f-c761-42b0-aba2-5fe9b024e3f4,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:11.428248 env[1324]: time="2025-11-01T00:23:11.428195887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-4nr5k,Uid:e3dd8f6c-3b39-4d19-a732-fff37a40f25e,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:11.435747 env[1324]: time="2025-11-01T00:23:11.435709697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-l78bq,Uid:098f9c0f-a24a-4001-88bf-ea4e44e957ea,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:11.437272 env[1324]: time="2025-11-01T00:23:11.437143619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cp7jx,Uid:bf1893fd-31bf-427a-928e-11685512f41a,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:11.442955 env[1324]: time="2025-11-01T00:23:11.442716787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595d769df8-7v79c,Uid:c9ce1775-ce62-4551-bd13-ddb007698a7f,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:11.443260 env[1324]: time="2025-11-01T00:23:11.443225108Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-866bcf4d9f-tt9sm,Uid:b83bf5c0-405f-4b6b-b82a-0980cae1df67,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:11.583613 env[1324]: time="2025-11-01T00:23:11.583532863Z" level=error msg="Failed to destroy network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.583960 env[1324]: time="2025-11-01T00:23:11.583918543Z" level=error msg="encountered an error cleaning up failed sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.584016 env[1324]: time="2025-11-01T00:23:11.583968303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c5b8,Uid:e95bf48f-c761-42b0-aba2-5fe9b024e3f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.584304 kubelet[2124]: E1101 00:23:11.584175 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.584401 env[1324]: 
time="2025-11-01T00:23:11.584213504Z" level=error msg="Failed to destroy network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.586809 kubelet[2124]: E1101 00:23:11.586715 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8c5b8" Nov 1 00:23:11.587506 env[1324]: time="2025-11-01T00:23:11.587073628Z" level=error msg="encountered an error cleaning up failed sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.587506 env[1324]: time="2025-11-01T00:23:11.587128708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfnxh,Uid:2ceacdf1-40ab-4971-87a7-298d59c91848,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.587602 kubelet[2124]: E1101 00:23:11.587293 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.587602 kubelet[2124]: E1101 00:23:11.587336 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jfnxh" Nov 1 00:23:11.587602 kubelet[2124]: E1101 00:23:11.587459 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jfnxh" Nov 1 00:23:11.587602 kubelet[2124]: E1101 00:23:11.587461 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8c5b8" Nov 1 00:23:11.587708 kubelet[2124]: E1101 00:23:11.587518 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jfnxh_kube-system(2ceacdf1-40ab-4971-87a7-298d59c91848)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"coredns-668d6bf9bc-jfnxh_kube-system(2ceacdf1-40ab-4971-87a7-298d59c91848)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jfnxh" podUID="2ceacdf1-40ab-4971-87a7-298d59c91848" Nov 1 00:23:11.587847 kubelet[2124]: E1101 00:23:11.587804 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8c5b8_kube-system(e95bf48f-c761-42b0-aba2-5fe9b024e3f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8c5b8_kube-system(e95bf48f-c761-42b0-aba2-5fe9b024e3f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8c5b8" podUID="e95bf48f-c761-42b0-aba2-5fe9b024e3f4" Nov 1 00:23:11.605342 env[1324]: time="2025-11-01T00:23:11.605282693Z" level=error msg="Failed to destroy network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.605492 env[1324]: time="2025-11-01T00:23:11.605299093Z" level=error msg="Failed to destroy network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.605721 env[1324]: time="2025-11-01T00:23:11.605682133Z" level=error msg="encountered an error cleaning up failed sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.605785 env[1324]: time="2025-11-01T00:23:11.605734773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-4nr5k,Uid:e3dd8f6c-3b39-4d19-a732-fff37a40f25e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.606095 kubelet[2124]: E1101 00:23:11.606044 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.606172 kubelet[2124]: E1101 00:23:11.606116 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" Nov 1 00:23:11.606172 kubelet[2124]: E1101 00:23:11.606143 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" Nov 1 00:23:11.606232 env[1324]: time="2025-11-01T00:23:11.606073374Z" level=error msg="encountered an error cleaning up failed sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.606232 env[1324]: time="2025-11-01T00:23:11.606112014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-l78bq,Uid:098f9c0f-a24a-4001-88bf-ea4e44e957ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.606315 kubelet[2124]: E1101 00:23:11.606189 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77cfb4d4d6-4nr5k_calico-apiserver(e3dd8f6c-3b39-4d19-a732-fff37a40f25e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77cfb4d4d6-4nr5k_calico-apiserver(e3dd8f6c-3b39-4d19-a732-fff37a40f25e)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:11.606469 kubelet[2124]: E1101 00:23:11.606440 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.606542 kubelet[2124]: E1101 00:23:11.606472 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" Nov 1 00:23:11.606542 kubelet[2124]: E1101 00:23:11.606487 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" Nov 1 00:23:11.606542 kubelet[2124]: E1101 00:23:11.606511 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-77cfb4d4d6-l78bq_calico-apiserver(098f9c0f-a24a-4001-88bf-ea4e44e957ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77cfb4d4d6-l78bq_calico-apiserver(098f9c0f-a24a-4001-88bf-ea4e44e957ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:11.627978 env[1324]: time="2025-11-01T00:23:11.627860684Z" level=error msg="Failed to destroy network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.629803 env[1324]: time="2025-11-01T00:23:11.629752007Z" level=error msg="encountered an error cleaning up failed sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.629882 env[1324]: time="2025-11-01T00:23:11.629822407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866bcf4d9f-tt9sm,Uid:b83bf5c0-405f-4b6b-b82a-0980cae1df67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.630069 kubelet[2124]: E1101 00:23:11.630033 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.630123 kubelet[2124]: E1101 00:23:11.630092 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" Nov 1 00:23:11.630123 kubelet[2124]: E1101 00:23:11.630112 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" Nov 1 00:23:11.630180 kubelet[2124]: E1101 00:23:11.630149 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-866bcf4d9f-tt9sm_calico-system(b83bf5c0-405f-4b6b-b82a-0980cae1df67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-866bcf4d9f-tt9sm_calico-system(b83bf5c0-405f-4b6b-b82a-0980cae1df67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:11.630841 env[1324]: time="2025-11-01T00:23:11.630805368Z" level=error msg="Failed to destroy network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.631820 env[1324]: time="2025-11-01T00:23:11.631338769Z" level=error msg="encountered an error cleaning up failed sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.631820 env[1324]: time="2025-11-01T00:23:11.631390129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595d769df8-7v79c,Uid:c9ce1775-ce62-4551-bd13-ddb007698a7f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.631950 kubelet[2124]: E1101 00:23:11.631828 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.631950 kubelet[2124]: E1101 00:23:11.631874 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-595d769df8-7v79c" Nov 1 00:23:11.631950 kubelet[2124]: E1101 00:23:11.631891 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-595d769df8-7v79c" Nov 1 00:23:11.632035 kubelet[2124]: E1101 00:23:11.631922 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-595d769df8-7v79c_calico-system(c9ce1775-ce62-4551-bd13-ddb007698a7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-595d769df8-7v79c_calico-system(c9ce1775-ce62-4551-bd13-ddb007698a7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-595d769df8-7v79c" podUID="c9ce1775-ce62-4551-bd13-ddb007698a7f" Nov 1 00:23:11.638512 env[1324]: 
time="2025-11-01T00:23:11.638455699Z" level=error msg="Failed to destroy network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.638859 env[1324]: time="2025-11-01T00:23:11.638830339Z" level=error msg="encountered an error cleaning up failed sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.638917 env[1324]: time="2025-11-01T00:23:11.638892859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cp7jx,Uid:bf1893fd-31bf-427a-928e-11685512f41a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.639209 kubelet[2124]: E1101 00:23:11.639078 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.639209 kubelet[2124]: E1101 00:23:11.639124 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cp7jx" Nov 1 00:23:11.639209 kubelet[2124]: E1101 00:23:11.639140 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cp7jx" Nov 1 00:23:11.639342 kubelet[2124]: E1101 00:23:11.639176 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cp7jx_calico-system(bf1893fd-31bf-427a-928e-11685512f41a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cp7jx_calico-system(bf1893fd-31bf-427a-928e-11685512f41a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:11.644669 env[1324]: time="2025-11-01T00:23:11.644630627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-52pqx,Uid:a584285f-c40b-477a-8ddb-bfa9e3439fe6,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:11.690608 env[1324]: time="2025-11-01T00:23:11.690530691Z" level=error msg="Failed to destroy network for sandbox 
\"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.690906 env[1324]: time="2025-11-01T00:23:11.690862612Z" level=error msg="encountered an error cleaning up failed sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.690950 env[1324]: time="2025-11-01T00:23:11.690914892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-52pqx,Uid:a584285f-c40b-477a-8ddb-bfa9e3439fe6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.691168 kubelet[2124]: E1101 00:23:11.691131 2124 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.691213 kubelet[2124]: E1101 00:23:11.691191 2124 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:11.691240 kubelet[2124]: E1101 00:23:11.691211 2124 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-52pqx" Nov 1 00:23:11.691299 kubelet[2124]: E1101 00:23:11.691267 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:11.711519 kubelet[2124]: E1101 00:23:11.711478 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:11.712935 env[1324]: time="2025-11-01T00:23:11.712904522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:23:11.715197 kubelet[2124]: I1101 00:23:11.715152 2124 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:11.716173 env[1324]: time="2025-11-01T00:23:11.716144567Z" level=info msg="StopPodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\"" Nov 1 00:23:11.722117 kubelet[2124]: I1101 00:23:11.721665 2124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:11.722428 env[1324]: time="2025-11-01T00:23:11.722379375Z" level=info msg="StopPodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\"" Nov 1 00:23:11.723824 kubelet[2124]: I1101 00:23:11.723457 2124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:11.723980 env[1324]: time="2025-11-01T00:23:11.723954418Z" level=info msg="StopPodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\"" Nov 1 00:23:11.727335 kubelet[2124]: I1101 00:23:11.726910 2124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:11.731047 env[1324]: time="2025-11-01T00:23:11.731007547Z" level=info msg="StopPodSandbox for \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\"" Nov 1 00:23:11.736300 kubelet[2124]: I1101 00:23:11.735913 2124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:11.736609 env[1324]: time="2025-11-01T00:23:11.736569395Z" level=info msg="StopPodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\"" Nov 1 00:23:11.738046 kubelet[2124]: I1101 00:23:11.738016 2124 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:11.738806 env[1324]: time="2025-11-01T00:23:11.738739358Z" level=info msg="StopPodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\"" Nov 1 00:23:11.739779 kubelet[2124]: I1101 00:23:11.739750 2124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:11.740269 env[1324]: time="2025-11-01T00:23:11.740232520Z" level=info msg="StopPodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\"" Nov 1 00:23:11.742646 kubelet[2124]: I1101 00:23:11.742613 2124 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:11.743070 env[1324]: time="2025-11-01T00:23:11.743037004Z" level=info msg="StopPodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\"" Nov 1 00:23:11.757584 env[1324]: time="2025-11-01T00:23:11.757525944Z" level=error msg="StopPodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" failed" error="failed to destroy network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.758158 kubelet[2124]: E1101 00:23:11.757953 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:11.758158 kubelet[2124]: E1101 00:23:11.758027 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767"} Nov 1 00:23:11.758158 kubelet[2124]: E1101 00:23:11.758102 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e95bf48f-c761-42b0-aba2-5fe9b024e3f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.758158 kubelet[2124]: E1101 00:23:11.758122 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e95bf48f-c761-42b0-aba2-5fe9b024e3f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8c5b8" podUID="e95bf48f-c761-42b0-aba2-5fe9b024e3f4" Nov 1 00:23:11.769369 env[1324]: time="2025-11-01T00:23:11.769303920Z" level=error msg="StopPodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" failed" error="failed to destroy network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
00:23:11.770585 kubelet[2124]: E1101 00:23:11.770425 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:11.770585 kubelet[2124]: E1101 00:23:11.770484 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578"} Nov 1 00:23:11.772207 kubelet[2124]: E1101 00:23:11.770521 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"098f9c0f-a24a-4001-88bf-ea4e44e957ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.772207 kubelet[2124]: E1101 00:23:11.772169 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"098f9c0f-a24a-4001-88bf-ea4e44e957ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:11.777766 env[1324]: 
time="2025-11-01T00:23:11.777710772Z" level=error msg="StopPodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" failed" error="failed to destroy network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.778138 kubelet[2124]: E1101 00:23:11.777964 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:11.778138 kubelet[2124]: E1101 00:23:11.778014 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496"} Nov 1 00:23:11.778138 kubelet[2124]: E1101 00:23:11.778057 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9ce1775-ce62-4551-bd13-ddb007698a7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.778138 kubelet[2124]: E1101 00:23:11.778110 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9ce1775-ce62-4551-bd13-ddb007698a7f\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-595d769df8-7v79c" podUID="c9ce1775-ce62-4551-bd13-ddb007698a7f" Nov 1 00:23:11.792616 env[1324]: time="2025-11-01T00:23:11.792532913Z" level=error msg="StopPodSandbox for \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" failed" error="failed to destroy network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.792855 kubelet[2124]: E1101 00:23:11.792812 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:11.792914 kubelet[2124]: E1101 00:23:11.792863 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f"} Nov 1 00:23:11.792944 kubelet[2124]: E1101 00:23:11.792912 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b83bf5c0-405f-4b6b-b82a-0980cae1df67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.792998 kubelet[2124]: E1101 00:23:11.792934 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b83bf5c0-405f-4b6b-b82a-0980cae1df67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:11.796619 env[1324]: time="2025-11-01T00:23:11.796568718Z" level=error msg="StopPodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" failed" error="failed to destroy network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.796881 kubelet[2124]: E1101 00:23:11.796832 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:11.796952 kubelet[2124]: E1101 00:23:11.796886 2124 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d"} Nov 1 00:23:11.796952 kubelet[2124]: E1101 00:23:11.796914 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.796952 kubelet[2124]: E1101 00:23:11.796934 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a584285f-c40b-477a-8ddb-bfa9e3439fe6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:11.800744 env[1324]: time="2025-11-01T00:23:11.800700124Z" level=error msg="StopPodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" failed" error="failed to destroy network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.801044 kubelet[2124]: E1101 00:23:11.800919 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to destroy network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:11.801044 kubelet[2124]: E1101 00:23:11.800969 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633"} Nov 1 00:23:11.801044 kubelet[2124]: E1101 00:23:11.800993 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ceacdf1-40ab-4971-87a7-298d59c91848\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.801044 kubelet[2124]: E1101 00:23:11.801021 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ceacdf1-40ab-4971-87a7-298d59c91848\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jfnxh" podUID="2ceacdf1-40ab-4971-87a7-298d59c91848" Nov 1 00:23:11.801772 env[1324]: time="2025-11-01T00:23:11.801737245Z" level=error msg="StopPodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" failed" error="failed to 
destroy network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.801905 kubelet[2124]: E1101 00:23:11.801880 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:11.801959 kubelet[2124]: E1101 00:23:11.801913 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15"} Nov 1 00:23:11.801959 kubelet[2124]: E1101 00:23:11.801939 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3dd8f6c-3b39-4d19-a732-fff37a40f25e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.802027 kubelet[2124]: E1101 00:23:11.801956 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3dd8f6c-3b39-4d19-a732-fff37a40f25e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:11.807880 env[1324]: time="2025-11-01T00:23:11.807841694Z" level=error msg="StopPodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" failed" error="failed to destroy network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.808075 kubelet[2124]: E1101 00:23:11.808042 2124 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:11.808144 kubelet[2124]: E1101 00:23:11.808081 2124 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce"} Nov 1 00:23:11.808144 kubelet[2124]: E1101 00:23:11.808104 2124 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf1893fd-31bf-427a-928e-11685512f41a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.808144 kubelet[2124]: E1101 00:23:11.808122 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf1893fd-31bf-427a-928e-11685512f41a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:12.358393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767-shm.mount: Deactivated successfully. Nov 1 00:23:12.358556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633-shm.mount: Deactivated successfully. Nov 1 00:23:16.690401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387678895.mount: Deactivated successfully. 
Nov 1 00:23:16.962217 env[1324]: time="2025-11-01T00:23:16.962117265Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:16.963809 env[1324]: time="2025-11-01T00:23:16.963771667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:16.965378 env[1324]: time="2025-11-01T00:23:16.965349788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:16.966738 env[1324]: time="2025-11-01T00:23:16.966708869Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:16.967174 env[1324]: time="2025-11-01T00:23:16.967148510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 1 00:23:16.984079 env[1324]: time="2025-11-01T00:23:16.984029207Z" level=info msg="CreateContainer within sandbox \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:23:16.997446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889667971.mount: Deactivated successfully. 
Nov 1 00:23:16.998747 env[1324]: time="2025-11-01T00:23:16.998714182Z" level=info msg="CreateContainer within sandbox \"4e589d3ff540d88a4610252ad3ba48fa9cf531a6eb9298a0c37422586ed46a26\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7621a62555f564289e10dfe2974ae3aa0367415a3746d1a3dee81946f4941aae\"" Nov 1 00:23:17.000514 env[1324]: time="2025-11-01T00:23:17.000483303Z" level=info msg="StartContainer for \"7621a62555f564289e10dfe2974ae3aa0367415a3746d1a3dee81946f4941aae\"" Nov 1 00:23:17.105996 env[1324]: time="2025-11-01T00:23:17.105945763Z" level=info msg="StartContainer for \"7621a62555f564289e10dfe2974ae3aa0367415a3746d1a3dee81946f4941aae\" returns successfully" Nov 1 00:23:17.192443 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:23:17.192577 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:23:17.313689 env[1324]: time="2025-11-01T00:23:17.313556478Z" level=info msg="StopPodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\"" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.396 [INFO][3407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.396 [INFO][3407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" iface="eth0" netns="/var/run/netns/cni-be8e53f4-495a-4f5f-27a0-01cfd509c4fc" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.397 [INFO][3407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" iface="eth0" netns="/var/run/netns/cni-be8e53f4-495a-4f5f-27a0-01cfd509c4fc" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.397 [INFO][3407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" iface="eth0" netns="/var/run/netns/cni-be8e53f4-495a-4f5f-27a0-01cfd509c4fc" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.398 [INFO][3407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.398 [INFO][3407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.543 [INFO][3418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.546 [INFO][3418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.546 [INFO][3418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.556 [WARNING][3418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.556 [INFO][3418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.557 [INFO][3418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:17.560693 env[1324]: 2025-11-01 00:23:17.559 [INFO][3407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:17.561182 env[1324]: time="2025-11-01T00:23:17.560823631Z" level=info msg="TearDown network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" successfully" Nov 1 00:23:17.561182 env[1324]: time="2025-11-01T00:23:17.560853272Z" level=info msg="StopPodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" returns successfully" Nov 1 00:23:17.586046 kubelet[2124]: I1101 00:23:17.585396 2124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-backend-key-pair\") pod \"c9ce1775-ce62-4551-bd13-ddb007698a7f\" (UID: \"c9ce1775-ce62-4551-bd13-ddb007698a7f\") " Nov 1 00:23:17.586046 kubelet[2124]: I1101 00:23:17.585454 2124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-ca-bundle\") pod \"c9ce1775-ce62-4551-bd13-ddb007698a7f\" (UID: 
\"c9ce1775-ce62-4551-bd13-ddb007698a7f\") " Nov 1 00:23:17.586046 kubelet[2124]: I1101 00:23:17.585481 2124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98mpz\" (UniqueName: \"kubernetes.io/projected/c9ce1775-ce62-4551-bd13-ddb007698a7f-kube-api-access-98mpz\") pod \"c9ce1775-ce62-4551-bd13-ddb007698a7f\" (UID: \"c9ce1775-ce62-4551-bd13-ddb007698a7f\") " Nov 1 00:23:17.586755 kubelet[2124]: I1101 00:23:17.586183 2124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c9ce1775-ce62-4551-bd13-ddb007698a7f" (UID: "c9ce1775-ce62-4551-bd13-ddb007698a7f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:17.588278 kubelet[2124]: I1101 00:23:17.588243 2124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ce1775-ce62-4551-bd13-ddb007698a7f-kube-api-access-98mpz" (OuterVolumeSpecName: "kube-api-access-98mpz") pod "c9ce1775-ce62-4551-bd13-ddb007698a7f" (UID: "c9ce1775-ce62-4551-bd13-ddb007698a7f"). InnerVolumeSpecName "kube-api-access-98mpz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:17.589646 kubelet[2124]: I1101 00:23:17.589615 2124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c9ce1775-ce62-4551-bd13-ddb007698a7f" (UID: "c9ce1775-ce62-4551-bd13-ddb007698a7f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:17.685855 kubelet[2124]: I1101 00:23:17.685819 2124 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:17.685855 kubelet[2124]: I1101 00:23:17.685852 2124 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ce1775-ce62-4551-bd13-ddb007698a7f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:17.685855 kubelet[2124]: I1101 00:23:17.685862 2124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98mpz\" (UniqueName: \"kubernetes.io/projected/c9ce1775-ce62-4551-bd13-ddb007698a7f-kube-api-access-98mpz\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:17.691326 systemd[1]: run-netns-cni\x2dbe8e53f4\x2d495a\x2d4f5f\x2d27a0\x2d01cfd509c4fc.mount: Deactivated successfully. Nov 1 00:23:17.691489 systemd[1]: var-lib-kubelet-pods-c9ce1775\x2dce62\x2d4551\x2dbd13\x2dddb007698a7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98mpz.mount: Deactivated successfully. Nov 1 00:23:17.691580 systemd[1]: var-lib-kubelet-pods-c9ce1775\x2dce62\x2d4551\x2dbd13\x2dddb007698a7f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:23:17.755905 kubelet[2124]: E1101 00:23:17.755875 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:17.772245 kubelet[2124]: I1101 00:23:17.772181 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wfrnf" podStartSLOduration=1.2643912529999999 podStartE2EDuration="13.772167111s" podCreationTimestamp="2025-11-01 00:23:04 +0000 UTC" firstStartedPulling="2025-11-01 00:23:04.460339133 +0000 UTC m=+21.924232311" lastFinishedPulling="2025-11-01 00:23:16.968114991 +0000 UTC m=+34.432008169" observedRunningTime="2025-11-01 00:23:17.770024589 +0000 UTC m=+35.233917727" watchObservedRunningTime="2025-11-01 00:23:17.772167111 +0000 UTC m=+35.236060289" Nov 1 00:23:17.887478 kubelet[2124]: I1101 00:23:17.887347 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bcd59ea-9151-4aaf-9b6c-77893bc394d7-whisker-backend-key-pair\") pod \"whisker-69fb5f888d-pptgw\" (UID: \"8bcd59ea-9151-4aaf-9b6c-77893bc394d7\") " pod="calico-system/whisker-69fb5f888d-pptgw" Nov 1 00:23:17.887478 kubelet[2124]: I1101 00:23:17.887396 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvw6\" (UniqueName: \"kubernetes.io/projected/8bcd59ea-9151-4aaf-9b6c-77893bc394d7-kube-api-access-jtvw6\") pod \"whisker-69fb5f888d-pptgw\" (UID: \"8bcd59ea-9151-4aaf-9b6c-77893bc394d7\") " pod="calico-system/whisker-69fb5f888d-pptgw" Nov 1 00:23:17.887478 kubelet[2124]: I1101 00:23:17.887448 2124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bcd59ea-9151-4aaf-9b6c-77893bc394d7-whisker-ca-bundle\") pod \"whisker-69fb5f888d-pptgw\" (UID: 
\"8bcd59ea-9151-4aaf-9b6c-77893bc394d7\") " pod="calico-system/whisker-69fb5f888d-pptgw" Nov 1 00:23:18.117888 env[1324]: time="2025-11-01T00:23:18.117475989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fb5f888d-pptgw,Uid:8bcd59ea-9151-4aaf-9b6c-77893bc394d7,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:18.246904 systemd-networkd[1098]: calicb232c6326b: Link UP Nov 1 00:23:18.249223 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:23:18.249310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicb232c6326b: link becomes ready Nov 1 00:23:18.249367 systemd-networkd[1098]: calicb232c6326b: Gained carrier Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.158 [INFO][3441] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.175 [INFO][3441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69fb5f888d--pptgw-eth0 whisker-69fb5f888d- calico-system 8bcd59ea-9151-4aaf-9b6c-77893bc394d7 939 0 2025-11-01 00:23:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69fb5f888d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69fb5f888d-pptgw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicb232c6326b [] [] }} ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.175 [INFO][3441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 
00:23:18.268688 env[1324]: 2025-11-01 00:23:18.198 [INFO][3455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" HandleID="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Workload="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.198 [INFO][3455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" HandleID="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Workload="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69fb5f888d-pptgw", "timestamp":"2025-11-01 00:23:18.198539941 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.198 [INFO][3455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.198 [INFO][3455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.198 [INFO][3455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.213 [INFO][3455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.218 [INFO][3455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.222 [INFO][3455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.223 [INFO][3455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.227 [INFO][3455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.227 [INFO][3455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.228 [INFO][3455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588 Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.232 [INFO][3455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.237 [INFO][3455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" host="localhost" Nov 1 00:23:18.268688 
env[1324]: 2025-11-01 00:23:18.237 [INFO][3455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" host="localhost" Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.237 [INFO][3455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.268688 env[1324]: 2025-11-01 00:23:18.237 [INFO][3455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" HandleID="k8s-pod-network.9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Workload="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 00:23:18.269242 env[1324]: 2025-11-01 00:23:18.239 [INFO][3441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69fb5f888d--pptgw-eth0", GenerateName:"whisker-69fb5f888d-", Namespace:"calico-system", SelfLink:"", UID:"8bcd59ea-9151-4aaf-9b6c-77893bc394d7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69fb5f888d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69fb5f888d-pptgw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicb232c6326b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.269242 env[1324]: 2025-11-01 00:23:18.239 [INFO][3441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 00:23:18.269242 env[1324]: 2025-11-01 00:23:18.239 [INFO][3441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb232c6326b ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 00:23:18.269242 env[1324]: 2025-11-01 00:23:18.250 [INFO][3441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 00:23:18.269242 env[1324]: 2025-11-01 00:23:18.251 [INFO][3441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69fb5f888d--pptgw-eth0", GenerateName:"whisker-69fb5f888d-", Namespace:"calico-system", SelfLink:"", UID:"8bcd59ea-9151-4aaf-9b6c-77893bc394d7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69fb5f888d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588", Pod:"whisker-69fb5f888d-pptgw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicb232c6326b", MAC:"a6:0a:35:9e:c3:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.269242 env[1324]: 2025-11-01 00:23:18.266 [INFO][3441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588" Namespace="calico-system" Pod="whisker-69fb5f888d-pptgw" WorkloadEndpoint="localhost-k8s-whisker--69fb5f888d--pptgw-eth0" Nov 1 00:23:18.277610 env[1324]: time="2025-11-01T00:23:18.277433450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:18.277610 env[1324]: time="2025-11-01T00:23:18.277470891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:18.277610 env[1324]: time="2025-11-01T00:23:18.277481131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.277750 env[1324]: time="2025-11-01T00:23:18.277664171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588 pid=3481 runtime=io.containerd.runc.v2 Nov 1 00:23:18.302058 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:18.318783 env[1324]: time="2025-11-01T00:23:18.318742287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fb5f888d-pptgw,Uid:8bcd59ea-9151-4aaf-9b6c-77893bc394d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a0f6eb561244101a53a4c934805eb8bf80123e76ef4b612ecad49b3c289e588\"" Nov 1 00:23:18.322220 env[1324]: time="2025-11-01T00:23:18.322184930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:18.536209 env[1324]: time="2025-11-01T00:23:18.536086799Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:18.537206 env[1324]: time="2025-11-01T00:23:18.537099680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:18.537484 kubelet[2124]: E1101 00:23:18.537400 2124 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:18.537484 kubelet[2124]: E1101 00:23:18.537465 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:18.539663 kubelet[2124]: E1101 00:23:18.537822 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5acdfbcf0fe34a9f88c8ad5a16543143,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtvw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProf
ile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69fb5f888d-pptgw_calico-system(8bcd59ea-9151-4aaf-9b6c-77893bc394d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:18.539847 env[1324]: time="2025-11-01T00:23:18.539683442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:18.600000 audit[3567]: AVC avc: denied { write } for pid=3567 comm="tee" name="fd" dev="proc" ino=19365 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.606286 kernel: kauditd_printk_skb: 8 callbacks suppressed Nov 1 00:23:18.606377 kernel: audit: type=1400 audit(1761956598.600:302): avc: denied { write } for pid=3567 comm="tee" name="fd" dev="proc" ino=19365 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.606415 kernel: audit: type=1300 audit(1761956598.600:302): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffc0e17d9 a2=241 a3=1b6 items=1 ppid=3524 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.600000 audit[3567]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffc0e17d9 a2=241 a3=1b6 items=1 ppid=3524 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.600000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:23:18.611924 kernel: audit: type=1307 audit(1761956598.600:302): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:23:18.611974 kernel: audit: type=1302 audit(1761956598.600:302): item=0 name="/dev/fd/63" inode=18288 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.600000 audit: PATH item=0 name="/dev/fd/63" inode=18288 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.616596 kernel: audit: type=1327 audit(1761956598.600:302): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.603000 audit[3578]: AVC avc: denied { write } for pid=3578 comm="tee" name="fd" dev="proc" ino=19371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.638396 kernel: audit: type=1400 audit(1761956598.603:303): avc: denied { write } for pid=3578 comm="tee" name="fd" dev="proc" ino=19371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.642414 kubelet[2124]: I1101 00:23:18.642377 2124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ce1775-ce62-4551-bd13-ddb007698a7f" path="/var/lib/kubelet/pods/c9ce1775-ce62-4551-bd13-ddb007698a7f/volumes" Nov 1 00:23:18.603000 audit[3578]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 
a0=ffffffffffffff9c a1=ffffd19d07e9 a2=241 a3=1b6 items=1 ppid=3522 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.653425 kernel: audit: type=1300 audit(1761956598.603:303): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd19d07e9 a2=241 a3=1b6 items=1 ppid=3522 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.603000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:23:18.659346 kernel: audit: type=1307 audit(1761956598.603:303): cwd="/etc/service/enabled/felix/log" Nov 1 00:23:18.603000 audit: PATH item=0 name="/dev/fd/63" inode=17314 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.668590 kernel: audit: type=1302 audit(1761956598.603:303): item=0 name="/dev/fd/63" inode=17314 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.670760 kernel: audit: type=1327 audit(1761956598.603:303): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.632000 audit[3592]: AVC avc: denied { write } for pid=3592 comm="tee" name="fd" dev="proc" ino=18298 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.632000 audit[3592]: SYSCALL arch=c00000b7 syscall=56 
success=yes exit=3 a0=ffffffffffffff9c a1=ffffcba477eb a2=241 a3=1b6 items=1 ppid=3530 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.632000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:23:18.632000 audit: PATH item=0 name="/dev/fd/63" inode=18293 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.637000 audit[3595]: AVC avc: denied { write } for pid=3595 comm="tee" name="fd" dev="proc" ino=19384 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.637000 audit[3595]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdb18a7e9 a2=241 a3=1b6 items=1 ppid=3533 pid=3595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.637000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 00:23:18.637000 audit: PATH item=0 name="/dev/fd/63" inode=19771 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.637000 audit[3587]: AVC avc: denied { write } for pid=3587 comm="tee" name="fd" dev="proc" ino=18305 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Nov 1 00:23:18.637000 audit[3587]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6fb87ea a2=241 a3=1b6 items=1 ppid=3523 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.637000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:23:18.637000 audit: PATH item=0 name="/dev/fd/63" inode=19379 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.644000 audit[3603]: AVC avc: denied { write } for pid=3603 comm="tee" name="fd" dev="proc" ino=19400 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.644000 audit[3603]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff260f7da a2=241 a3=1b6 items=1 ppid=3528 pid=3603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.644000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:23:18.644000 audit: PATH item=0 name="/dev/fd/63" inode=17323 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.681000 audit[3607]: AVC avc: denied { write } for pid=3607 comm="tee" 
name="fd" dev="proc" ino=18314 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:23:18.681000 audit[3607]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4f197e9 a2=241 a3=1b6 items=1 ppid=3532 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:18.681000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:23:18.681000 audit: PATH item=0 name="/dev/fd/63" inode=17326 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:23:18.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:23:18.752413 env[1324]: time="2025-11-01T00:23:18.752359430Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:18.756787 env[1324]: time="2025-11-01T00:23:18.756742114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:18.756910 kubelet[2124]: E1101 00:23:18.756877 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:18.756959 kubelet[2124]: 
E1101 00:23:18.756915 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:18.757062 kubelet[2124]: E1101 00:23:18.757009 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtvw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:
nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69fb5f888d-pptgw_calico-system(8bcd59ea-9151-4aaf-9b6c-77893bc394d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:18.758220 kubelet[2124]: E1101 00:23:18.758167 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:23:18.758691 kubelet[2124]: I1101 00:23:18.758663 2124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:18.759059 kubelet[2124]: E1101 00:23:18.759032 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:19.734528 systemd-networkd[1098]: 
calicb232c6326b: Gained IPv6LL Nov 1 00:23:19.764039 kubelet[2124]: E1101 00:23:19.763993 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:23:19.798000 audit[3642]: NETFILTER_CFG table=filter:105 family=2 entries=22 op=nft_register_rule pid=3642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:19.798000 audit[3642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc0806280 a2=0 a3=1 items=0 ppid=2274 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:19.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:19.807000 audit[3642]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:19.807000 audit[3642]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=2700 a0=3 a1=ffffc0806280 a2=0 a3=1 items=0 ppid=2274 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:19.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:20.766049 kubelet[2124]: E1101 00:23:20.765998 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:23:22.651283 env[1324]: time="2025-11-01T00:23:22.650817226Z" level=info msg="StopPodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\"" Nov 1 00:23:22.651283 env[1324]: time="2025-11-01T00:23:22.650884146Z" level=info msg="StopPodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\"" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.709 [INFO][3718] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.709 [INFO][3718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" iface="eth0" netns="/var/run/netns/cni-808cf0bb-0ace-cd90-9034-82bd5d0509c6" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.709 [INFO][3718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" iface="eth0" netns="/var/run/netns/cni-808cf0bb-0ace-cd90-9034-82bd5d0509c6" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.709 [INFO][3718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" iface="eth0" netns="/var/run/netns/cni-808cf0bb-0ace-cd90-9034-82bd5d0509c6" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.709 [INFO][3718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.709 [INFO][3718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.732 [INFO][3733] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.732 [INFO][3733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.732 [INFO][3733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.748 [WARNING][3733] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.748 [INFO][3733] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.750 [INFO][3733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.765984 env[1324]: 2025-11-01 00:23:22.763 [INFO][3718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:22.772107 env[1324]: time="2025-11-01T00:23:22.768638867Z" level=info msg="TearDown network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" successfully" Nov 1 00:23:22.772107 env[1324]: time="2025-11-01T00:23:22.768679667Z" level=info msg="StopPodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" returns successfully" Nov 1 00:23:22.772107 env[1324]: time="2025-11-01T00:23:22.769750387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-l78bq,Uid:098f9c0f-a24a-4001-88bf-ea4e44e957ea,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:22.770621 systemd[1]: run-netns-cni\x2d808cf0bb\x2d0ace\x2dcd90\x2d9034\x2d82bd5d0509c6.mount: Deactivated successfully. 
Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.708 [INFO][3714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.708 [INFO][3714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" iface="eth0" netns="/var/run/netns/cni-afd3b725-b76a-1047-43ec-eb16589ba4a0" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.709 [INFO][3714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" iface="eth0" netns="/var/run/netns/cni-afd3b725-b76a-1047-43ec-eb16589ba4a0" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.709 [INFO][3714] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" iface="eth0" netns="/var/run/netns/cni-afd3b725-b76a-1047-43ec-eb16589ba4a0" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.710 [INFO][3714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.710 [INFO][3714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.752 [INFO][3739] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.752 [INFO][3739] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.752 [INFO][3739] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.763 [WARNING][3739] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.763 [INFO][3739] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.766 [INFO][3739] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.778771 env[1324]: 2025-11-01 00:23:22.774 [INFO][3714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:22.780885 systemd[1]: run-netns-cni\x2dafd3b725\x2db76a\x2d1047\x2d43ec\x2deb16589ba4a0.mount: Deactivated successfully. 
Nov 1 00:23:22.781566 env[1324]: time="2025-11-01T00:23:22.781530835Z" level=info msg="TearDown network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" successfully" Nov 1 00:23:22.781618 env[1324]: time="2025-11-01T00:23:22.781566915Z" level=info msg="StopPodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" returns successfully" Nov 1 00:23:22.781855 kubelet[2124]: E1101 00:23:22.781830 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:22.782970 env[1324]: time="2025-11-01T00:23:22.782892276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c5b8,Uid:e95bf48f-c761-42b0-aba2-5fe9b024e3f4,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:22.949887 systemd-networkd[1098]: cali2b36af0aaf2: Link UP Nov 1 00:23:22.951948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:23:22.951974 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2b36af0aaf2: link becomes ready Nov 1 00:23:22.951683 systemd-networkd[1098]: cali2b36af0aaf2: Gained carrier Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.841 [INFO][3750] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.858 [INFO][3750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0 calico-apiserver-77cfb4d4d6- calico-apiserver 098f9c0f-a24a-4001-88bf-ea4e44e957ea 977 0 2025-11-01 00:22:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77cfb4d4d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77cfb4d4d6-l78bq eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b36af0aaf2 [] [] }} ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.858 [INFO][3750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.884 [INFO][3778] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" HandleID="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.884 [INFO][3778] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" HandleID="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004400d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77cfb4d4d6-l78bq", "timestamp":"2025-11-01 00:23:22.884779826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.884 [INFO][3778] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.885 [INFO][3778] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.885 [INFO][3778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.894 [INFO][3778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.898 [INFO][3778] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.903 [INFO][3778] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.905 [INFO][3778] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.907 [INFO][3778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.907 [INFO][3778] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.908 [INFO][3778] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.918 [INFO][3778] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.944 [INFO][3778] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] 
block=192.168.88.128/26 handle="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.944 [INFO][3778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" host="localhost" Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.944 [INFO][3778] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.965493 env[1324]: 2025-11-01 00:23:22.944 [INFO][3778] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" HandleID="k8s-pod-network.ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.966043 env[1324]: 2025-11-01 00:23:22.946 [INFO][3750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"098f9c0f-a24a-4001-88bf-ea4e44e957ea", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77cfb4d4d6-l78bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b36af0aaf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.966043 env[1324]: 2025-11-01 00:23:22.947 [INFO][3750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.966043 env[1324]: 2025-11-01 00:23:22.947 [INFO][3750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b36af0aaf2 ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.966043 env[1324]: 2025-11-01 00:23:22.952 [INFO][3750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.966043 env[1324]: 2025-11-01 00:23:22.952 [INFO][3750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"098f9c0f-a24a-4001-88bf-ea4e44e957ea", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d", Pod:"calico-apiserver-77cfb4d4d6-l78bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b36af0aaf2", MAC:"3e:a7:6e:ad:23:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.966043 env[1324]: 2025-11-01 00:23:22.962 [INFO][3750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-l78bq" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:22.977383 env[1324]: time="2025-11-01T00:23:22.976055568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:22.977383 env[1324]: time="2025-11-01T00:23:22.976094288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:22.977383 env[1324]: time="2025-11-01T00:23:22.976104448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:22.977383 env[1324]: time="2025-11-01T00:23:22.976260488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d pid=3817 runtime=io.containerd.runc.v2 Nov 1 00:23:23.028088 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.044920 systemd-networkd[1098]: calif63ed2b1edc: Link UP Nov 1 00:23:23.046429 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif63ed2b1edc: link becomes ready Nov 1 00:23:23.049575 systemd-networkd[1098]: calif63ed2b1edc: Gained carrier Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.848 [INFO][3756] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.865 [INFO][3756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0 coredns-668d6bf9bc- kube-system e95bf48f-c761-42b0-aba2-5fe9b024e3f4 978 0 2025-11-01 00:22:48 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8c5b8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif63ed2b1edc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.865 [INFO][3756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.897 [INFO][3786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" HandleID="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.897 [INFO][3786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" HandleID="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005287c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8c5b8", "timestamp":"2025-11-01 00:23:22.897683875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.897 [INFO][3786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.944 [INFO][3786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:22.944 [INFO][3786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.004 [INFO][3786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.014 [INFO][3786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.020 [INFO][3786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.022 [INFO][3786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.026 [INFO][3786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.026 [INFO][3786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.028 [INFO][3786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.032 [INFO][3786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.040 [INFO][3786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.040 [INFO][3786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" host="localhost" Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.040 [INFO][3786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.064938 env[1324]: 2025-11-01 00:23:23.040 [INFO][3786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" HandleID="k8s-pod-network.4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.065992 env[1324]: 2025-11-01 00:23:23.043 [INFO][3756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e95bf48f-c761-42b0-aba2-5fe9b024e3f4", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8c5b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif63ed2b1edc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.065992 env[1324]: 2025-11-01 00:23:23.043 [INFO][3756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.065992 env[1324]: 2025-11-01 00:23:23.043 [INFO][3756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif63ed2b1edc ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.065992 env[1324]: 2025-11-01 00:23:23.046 [INFO][3756] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.065992 env[1324]: 2025-11-01 00:23:23.047 [INFO][3756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e95bf48f-c761-42b0-aba2-5fe9b024e3f4", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f", Pod:"coredns-668d6bf9bc-8c5b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif63ed2b1edc", MAC:"16:06:18:30:ba:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.065992 env[1324]: 2025-11-01 00:23:23.057 [INFO][3756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8c5b8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:23.075477 env[1324]: time="2025-11-01T00:23:23.075435353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-l78bq,Uid:098f9c0f-a24a-4001-88bf-ea4e44e957ea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d\"" Nov 1 00:23:23.077853 env[1324]: time="2025-11-01T00:23:23.077815274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:23.091037 env[1324]: time="2025-11-01T00:23:23.090003722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:23.091310 env[1324]: time="2025-11-01T00:23:23.091055683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:23.091310 env[1324]: time="2025-11-01T00:23:23.091088283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:23.092211 env[1324]: time="2025-11-01T00:23:23.091515363Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f pid=3881 runtime=io.containerd.runc.v2 Nov 1 00:23:23.129465 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.144164 env[1324]: time="2025-11-01T00:23:23.144117877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c5b8,Uid:e95bf48f-c761-42b0-aba2-5fe9b024e3f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f\"" Nov 1 00:23:23.144802 kubelet[2124]: E1101 00:23:23.144760 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:23.146645 env[1324]: time="2025-11-01T00:23:23.146573078Z" level=info msg="CreateContainer within sandbox \"4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:23.163135 env[1324]: time="2025-11-01T00:23:23.163092169Z" level=info msg="CreateContainer within sandbox \"4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6639a867f7fdaf1dfa6faf2f53859d29f4fd94682aec1c808f327d97cad0458e\"" Nov 1 00:23:23.163732 env[1324]: time="2025-11-01T00:23:23.163706849Z" level=info msg="StartContainer for \"6639a867f7fdaf1dfa6faf2f53859d29f4fd94682aec1c808f327d97cad0458e\"" Nov 1 00:23:23.209261 env[1324]: time="2025-11-01T00:23:23.206460757Z" level=info msg="StartContainer for \"6639a867f7fdaf1dfa6faf2f53859d29f4fd94682aec1c808f327d97cad0458e\" returns successfully" Nov 1 
00:23:23.234555 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:35530.service. Nov 1 00:23:23.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-10.0.0.1:35530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:23.282000 audit[3947]: USER_ACCT pid=3947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:23.283983 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 35530 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:23.284000 audit[3947]: CRED_ACQ pid=3947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:23.284000 audit[3947]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcc117b0 a2=3 a3=1 items=0 ppid=1 pid=3947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.284000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:23.285895 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:23.287776 env[1324]: time="2025-11-01T00:23:23.287733489Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:23.288808 env[1324]: time="2025-11-01T00:23:23.288742769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:23.290548 kubelet[2124]: E1101 00:23:23.290483 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:23.290548 kubelet[2124]: E1101 00:23:23.290542 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:23.290678 systemd[1]: Started session-8.scope. Nov 1 00:23:23.290758 kubelet[2124]: E1101 00:23:23.290666 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfv7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77cfb4d4d6-l78bq_calico-apiserver(098f9c0f-a24a-4001-88bf-ea4e44e957ea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:23.290847 systemd-logind[1310]: New session 8 of user core. Nov 1 00:23:23.292093 kubelet[2124]: E1101 00:23:23.292052 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:23.295000 audit[3947]: USER_START pid=3947 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:23.297000 audit[3953]: CRED_ACQ pid=3953 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:23.443187 sshd[3947]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:23.442000 audit[3947]: USER_END pid=3947 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:23.442000 audit[3947]: CRED_DISP pid=3947 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:23.445660 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:35530.service: Deactivated successfully. Nov 1 00:23:23.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-10.0.0.1:35530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:23.446679 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:23:23.446984 systemd-logind[1310]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:23:23.447679 systemd-logind[1310]: Removed session 8. Nov 1 00:23:23.641099 env[1324]: time="2025-11-01T00:23:23.641003995Z" level=info msg="StopPodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\"" Nov 1 00:23:23.642398 env[1324]: time="2025-11-01T00:23:23.641073235Z" level=info msg="StopPodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\"" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.695 [INFO][3993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.696 [INFO][3993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" iface="eth0" netns="/var/run/netns/cni-a31e1e18-5ef6-38f7-e5cf-dd81cf34b4e3" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.696 [INFO][3993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" iface="eth0" netns="/var/run/netns/cni-a31e1e18-5ef6-38f7-e5cf-dd81cf34b4e3" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.696 [INFO][3993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" iface="eth0" netns="/var/run/netns/cni-a31e1e18-5ef6-38f7-e5cf-dd81cf34b4e3" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.696 [INFO][3993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.696 [INFO][3993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.719 [INFO][4011] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.719 [INFO][4011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.719 [INFO][4011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.729 [WARNING][4011] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.729 [INFO][4011] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.731 [INFO][4011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.737586 env[1324]: 2025-11-01 00:23:23.734 [INFO][3993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:23.738693 env[1324]: time="2025-11-01T00:23:23.737754976Z" level=info msg="TearDown network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" successfully" Nov 1 00:23:23.738693 env[1324]: time="2025-11-01T00:23:23.737790936Z" level=info msg="StopPodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" returns successfully" Nov 1 00:23:23.739815 env[1324]: time="2025-11-01T00:23:23.739679378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-52pqx,Uid:a584285f-c40b-477a-8ddb-bfa9e3439fe6,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.692 [INFO][3994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.692 [INFO][3994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" iface="eth0" netns="/var/run/netns/cni-a7eb22e0-e757-4049-d73a-38d7ed620ad0" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.693 [INFO][3994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" iface="eth0" netns="/var/run/netns/cni-a7eb22e0-e757-4049-d73a-38d7ed620ad0" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.693 [INFO][3994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" iface="eth0" netns="/var/run/netns/cni-a7eb22e0-e757-4049-d73a-38d7ed620ad0" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.693 [INFO][3994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.693 [INFO][3994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.721 [INFO][4009] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.721 [INFO][4009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.731 [INFO][4009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.741 [WARNING][4009] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.741 [INFO][4009] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.744 [INFO][4009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.748119 env[1324]: 2025-11-01 00:23:23.746 [INFO][3994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:23.748754 env[1324]: time="2025-11-01T00:23:23.748253263Z" level=info msg="TearDown network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" successfully" Nov 1 00:23:23.748754 env[1324]: time="2025-11-01T00:23:23.748282183Z" level=info msg="StopPodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" returns successfully" Nov 1 00:23:23.749159 env[1324]: time="2025-11-01T00:23:23.749121224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cp7jx,Uid:bf1893fd-31bf-427a-928e-11685512f41a,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:23.772089 systemd[1]: run-netns-cni\x2da31e1e18\x2d5ef6\x2d38f7\x2de5cf\x2ddd81cf34b4e3.mount: Deactivated successfully. Nov 1 00:23:23.772229 systemd[1]: run-netns-cni\x2da7eb22e0\x2de757\x2d4049\x2dd73a\x2d38d7ed620ad0.mount: Deactivated successfully. 
Nov 1 00:23:23.775204 kubelet[2124]: I1101 00:23:23.775158 2124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:23.775612 kubelet[2124]: E1101 00:23:23.775589 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:23.780036 kubelet[2124]: E1101 00:23:23.779685 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:23.786931 kubelet[2124]: E1101 00:23:23.786828 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:23.800000 audit[4051]: NETFILTER_CFG table=filter:107 family=2 entries=22 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.802931 kernel: kauditd_printk_skb: 42 callbacks suppressed Nov 1 00:23:23.803184 kernel: audit: type=1325 audit(1761956603.800:320): table=filter:107 family=2 entries=22 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.800000 audit[4051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcecabfa0 a2=0 a3=1 items=0 ppid=2274 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.809668 kernel: audit: type=1300 audit(1761956603.800:320): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcecabfa0 a2=0 a3=1 items=0 ppid=2274 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.809744 kernel: audit: type=1327 audit(1761956603.800:320): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.814000 audit[4051]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.818427 kernel: audit: type=1325 audit(1761956603.814:321): table=nat:108 family=2 entries=12 op=nft_register_rule pid=4051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.818493 kernel: audit: type=1300 audit(1761956603.814:321): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcecabfa0 a2=0 a3=1 items=0 ppid=2274 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.814000 audit[4051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcecabfa0 a2=0 a3=1 items=0 ppid=2274 pid=4051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.820442 kubelet[2124]: I1101 00:23:23.820367 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-8c5b8" podStartSLOduration=35.820348909 podStartE2EDuration="35.820348909s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:23.820010469 +0000 UTC m=+41.283903647" watchObservedRunningTime="2025-11-01 00:23:23.820348909 +0000 UTC m=+41.284242047" Nov 1 00:23:23.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.827680 kernel: audit: type=1327 audit(1761956603.814:321): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.839000 audit[4055]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.839000 audit[4055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff0c5c500 a2=0 a3=1 items=0 ppid=2274 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.847274 kernel: audit: type=1325 audit(1761956603.839:322): table=filter:109 family=2 entries=18 op=nft_register_rule pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.847346 kernel: audit: type=1300 audit(1761956603.839:322): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff0c5c500 a2=0 a3=1 items=0 ppid=2274 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.847374 kernel: audit: type=1327 audit(1761956603.839:322): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.849000 audit[4055]: NETFILTER_CFG table=nat:110 family=2 entries=40 op=nft_register_chain pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.849000 audit[4055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17004 a0=3 a1=fffff0c5c500 a2=0 a3=1 items=0 ppid=2274 pid=4055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:23.853472 kernel: audit: type=1325 audit(1761956603.849:323): table=nat:110 family=2 entries=40 op=nft_register_chain pid=4055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:23.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:23.912473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6b455043a5a: link becomes ready Nov 1 00:23:23.911239 systemd-networkd[1098]: cali6b455043a5a: Link UP Nov 1 00:23:23.911479 systemd-networkd[1098]: cali6b455043a5a: Gained carrier Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.807 [INFO][4037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.841 [INFO][4037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--cp7jx-eth0 goldmane-666569f655- calico-system bf1893fd-31bf-427a-928e-11685512f41a 1031 0 2025-11-01 00:23:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-cp7jx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6b455043a5a [] [] }} ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.841 [INFO][4037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.872 [INFO][4062] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" HandleID="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.872 [INFO][4062] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" HandleID="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-cp7jx", "timestamp":"2025-11-01 00:23:23.872246022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.872 
[INFO][4062] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.872 [INFO][4062] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.872 [INFO][4062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.881 [INFO][4062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.886 [INFO][4062] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.890 [INFO][4062] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.892 [INFO][4062] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.894 [INFO][4062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.894 [INFO][4062] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.895 [INFO][4062] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113 Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.899 [INFO][4062] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.905 [INFO][4062] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.905 [INFO][4062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" host="localhost" Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.905 [INFO][4062] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.926002 env[1324]: 2025-11-01 00:23:23.906 [INFO][4062] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" HandleID="k8s-pod-network.a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.926653 env[1324]: 2025-11-01 00:23:23.908 [INFO][4037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cp7jx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bf1893fd-31bf-427a-928e-11685512f41a", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-cp7jx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6b455043a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.926653 env[1324]: 2025-11-01 00:23:23.908 [INFO][4037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.926653 env[1324]: 2025-11-01 00:23:23.908 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b455043a5a ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.926653 env[1324]: 2025-11-01 00:23:23.912 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.926653 env[1324]: 2025-11-01 00:23:23.913 [INFO][4037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cp7jx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bf1893fd-31bf-427a-928e-11685512f41a", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113", Pod:"goldmane-666569f655-cp7jx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6b455043a5a", MAC:"ce:c9:d1:02:92:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.926653 env[1324]: 2025-11-01 00:23:23.923 [INFO][4037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113" Namespace="calico-system" Pod="goldmane-666569f655-cp7jx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:23.939209 env[1324]: time="2025-11-01T00:23:23.939142985Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:23.939209 env[1324]: time="2025-11-01T00:23:23.939181385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:23.939389 env[1324]: time="2025-11-01T00:23:23.939192305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:23.939647 env[1324]: time="2025-11-01T00:23:23.939613026Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113 pid=4089 runtime=io.containerd.runc.v2 Nov 1 00:23:23.965170 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.983973 env[1324]: time="2025-11-01T00:23:23.983930054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cp7jx,Uid:bf1893fd-31bf-427a-928e-11685512f41a,Namespace:calico-system,Attempt:1,} returns sandbox id \"a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113\"" Nov 1 00:23:23.987428 env[1324]: time="2025-11-01T00:23:23.986324855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:24.008894 systemd-networkd[1098]: cali66e52162bd1: Link UP Nov 1 00:23:24.011107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:23:24.011183 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali66e52162bd1: link becomes ready Nov 1 00:23:24.011300 systemd-networkd[1098]: cali66e52162bd1: Gained carrier Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.799 [INFO][4025] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.833 [INFO][4025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--52pqx-eth0 csi-node-driver- calico-system a584285f-c40b-477a-8ddb-bfa9e3439fe6 1032 0 2025-11-01 00:23:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-52pqx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66e52162bd1 [] [] }} ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.833 [INFO][4025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.875 [INFO][4057] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" HandleID="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.875 [INFO][4057] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" HandleID="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b020), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"csi-node-driver-52pqx", "timestamp":"2025-11-01 00:23:23.875792225 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.875 [INFO][4057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.906 [INFO][4057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.906 [INFO][4057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.982 [INFO][4057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.987 [INFO][4057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.991 [INFO][4057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.993 [INFO][4057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.995 [INFO][4057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.995 [INFO][4057] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.996 [INFO][4057] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7 Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:23.999 [INFO][4057] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:24.004 [INFO][4057] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:24.004 [INFO][4057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" host="localhost" Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:24.004 [INFO][4057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.026894 env[1324]: 2025-11-01 00:23:24.004 [INFO][4057] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" HandleID="k8s-pod-network.05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.027520 env[1324]: 2025-11-01 00:23:24.006 [INFO][4025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--52pqx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a584285f-c40b-477a-8ddb-bfa9e3439fe6", 
ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-52pqx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66e52162bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.027520 env[1324]: 2025-11-01 00:23:24.007 [INFO][4025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.027520 env[1324]: 2025-11-01 00:23:24.007 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66e52162bd1 ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.027520 env[1324]: 2025-11-01 00:23:24.011 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.027520 env[1324]: 2025-11-01 00:23:24.012 [INFO][4025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--52pqx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a584285f-c40b-477a-8ddb-bfa9e3439fe6", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7", Pod:"csi-node-driver-52pqx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66e52162bd1", MAC:"e2:b6:21:fd:44:bf", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.027520 env[1324]: 2025-11-01 00:23:24.025 [INFO][4025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7" Namespace="calico-system" Pod="csi-node-driver-52pqx" WorkloadEndpoint="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:24.042513 env[1324]: time="2025-11-01T00:23:24.042430090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.042513 env[1324]: time="2025-11-01T00:23:24.042479130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.042513 env[1324]: time="2025-11-01T00:23:24.042489690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.042696 env[1324]: time="2025-11-01T00:23:24.042621650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7 pid=4144 runtime=io.containerd.runc.v2 Nov 1 00:23:24.078503 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:24.094677 env[1324]: time="2025-11-01T00:23:24.094628361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-52pqx,Uid:a584285f-c40b-477a-8ddb-bfa9e3439fe6,Namespace:calico-system,Attempt:1,} returns sandbox id \"05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7\"" Nov 1 00:23:24.145561 systemd-networkd[1098]: calif63ed2b1edc: Gained IPv6LL Nov 1 00:23:24.183321 env[1324]: time="2025-11-01T00:23:24.183210214Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:23:24.184548 env[1324]: time="2025-11-01T00:23:24.184503015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:24.184977 kubelet[2124]: E1101 00:23:24.184749 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:24.184977 kubelet[2124]: E1101 00:23:24.184806 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:24.185122 kubelet[2124]: E1101 00:23:24.185025 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhn7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cp7jx_calico-system(bf1893fd-31bf-427a-928e-11685512f41a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:24.185513 env[1324]: time="2025-11-01T00:23:24.185487336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:24.186634 kubelet[2124]: E1101 00:23:24.186595 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:24.367146 env[1324]: time="2025-11-01T00:23:24.367084044Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 00:23:24.368152 env[1324]: time="2025-11-01T00:23:24.368090205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:24.368369 kubelet[2124]: E1101 00:23:24.368333 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:24.368457 kubelet[2124]: E1101 00:23:24.368381 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:24.368553 kubelet[2124]: E1101 00:23:24.368514 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:24.370560 env[1324]: time="2025-11-01T00:23:24.370529246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:24.529575 systemd-networkd[1098]: cali2b36af0aaf2: Gained IPv6LL Nov 1 00:23:24.593971 env[1324]: time="2025-11-01T00:23:24.593909740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:24.595096 env[1324]: time="2025-11-01T00:23:24.595011781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:24.595292 kubelet[2124]: E1101 00:23:24.595243 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:24.595370 kubelet[2124]: E1101 00:23:24.595292 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:24.595486 kubelet[2124]: E1101 00:23:24.595415 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:24.596659 kubelet[2124]: E1101 00:23:24.596601 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:24.643147 env[1324]: time="2025-11-01T00:23:24.642514010Z" level=info msg="StopPodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\"" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.689 [INFO][4224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.690 [INFO][4224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" iface="eth0" netns="/var/run/netns/cni-ea9a5015-f43f-c4db-a203-23bbf3bb60dc" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.690 [INFO][4224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" iface="eth0" netns="/var/run/netns/cni-ea9a5015-f43f-c4db-a203-23bbf3bb60dc" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.690 [INFO][4224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" iface="eth0" netns="/var/run/netns/cni-ea9a5015-f43f-c4db-a203-23bbf3bb60dc" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.690 [INFO][4224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.690 [INFO][4224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.707 [INFO][4233] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.707 [INFO][4233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.707 [INFO][4233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.715 [WARNING][4233] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.715 [INFO][4233] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.716 [INFO][4233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.720072 env[1324]: 2025-11-01 00:23:24.718 [INFO][4224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:24.720707 env[1324]: time="2025-11-01T00:23:24.720671856Z" level=info msg="TearDown network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" successfully" Nov 1 00:23:24.720785 env[1324]: time="2025-11-01T00:23:24.720768057Z" level=info msg="StopPodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" returns successfully" Nov 1 00:23:24.721535 env[1324]: time="2025-11-01T00:23:24.721500777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-4nr5k,Uid:e3dd8f6c-3b39-4d19-a732-fff37a40f25e,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:24.774382 systemd[1]: run-containerd-runc-k8s.io-05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7-runc.FhmSEe.mount: Deactivated successfully. Nov 1 00:23:24.774532 systemd[1]: run-netns-cni\x2dea9a5015\x2df43f\x2dc4db\x2da203\x2d23bbf3bb60dc.mount: Deactivated successfully. 
Nov 1 00:23:24.801079 kubelet[2124]: E1101 00:23:24.800933 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:24.804092 kubelet[2124]: E1101 00:23:24.803728 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.804092 kubelet[2124]: E1101 00:23:24.803741 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.805665 kubelet[2124]: E1101 00:23:24.805556 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:24.805665 kubelet[2124]: E1101 00:23:24.805626 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:24.841000 audit[4270]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:24.841000 audit[4270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd56bd060 a2=0 a3=1 items=0 ppid=2274 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:24.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:24.847000 audit[4270]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:24.847000 audit[4270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd56bd060 a2=0 a3=1 items=0 ppid=2274 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:24.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:24.894304 systemd-networkd[1098]: calidd768c08cb3: Link UP Nov 1 00:23:24.896818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidd768c08cb3: link becomes ready Nov 1 00:23:24.896391 systemd-networkd[1098]: calidd768c08cb3: Gained carrier Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.752 [INFO][4241] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.767 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0 calico-apiserver-77cfb4d4d6- calico-apiserver e3dd8f6c-3b39-4d19-a732-fff37a40f25e 1069 0 2025-11-01 00:22:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77cfb4d4d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77cfb4d4d6-4nr5k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd768c08cb3 [] [] }} ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.767 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.810 [INFO][4255] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" HandleID="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.810 [INFO][4255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" HandleID="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77cfb4d4d6-4nr5k", "timestamp":"2025-11-01 00:23:24.81053251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.810 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.811 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.811 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.828 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.870 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.874 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.876 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.879 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.879 [INFO][4255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.881 [INFO][4255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.885 [INFO][4255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.890 [INFO][4255] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" host="localhost" Nov 1 00:23:24.906325 
env[1324]: 2025-11-01 00:23:24.890 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" host="localhost" Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.890 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.906325 env[1324]: 2025-11-01 00:23:24.890 [INFO][4255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" HandleID="k8s-pod-network.4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.907139 env[1324]: 2025-11-01 00:23:24.892 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3dd8f6c-3b39-4d19-a732-fff37a40f25e", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77cfb4d4d6-4nr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd768c08cb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.907139 env[1324]: 2025-11-01 00:23:24.892 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.907139 env[1324]: 2025-11-01 00:23:24.892 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd768c08cb3 ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.907139 env[1324]: 2025-11-01 00:23:24.894 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.907139 env[1324]: 2025-11-01 00:23:24.895 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3dd8f6c-3b39-4d19-a732-fff37a40f25e", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc", Pod:"calico-apiserver-77cfb4d4d6-4nr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd768c08cb3", MAC:"4e:34:63:3d:f8:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.907139 env[1324]: 2025-11-01 00:23:24.903 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc" Namespace="calico-apiserver" Pod="calico-apiserver-77cfb4d4d6-4nr5k" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:24.942440 env[1324]: time="2025-11-01T00:23:24.936962626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.942440 env[1324]: time="2025-11-01T00:23:24.936996066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.942440 env[1324]: time="2025-11-01T00:23:24.937005946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.942440 env[1324]: time="2025-11-01T00:23:24.937115746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc pid=4286 runtime=io.containerd.runc.v2 Nov 1 00:23:25.000430 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:25.029473 env[1324]: time="2025-11-01T00:23:25.029387801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cfb4d4d6-4nr5k,Uid:e3dd8f6c-3b39-4d19-a732-fff37a40f25e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc\"" Nov 1 00:23:25.031646 env[1324]: time="2025-11-01T00:23:25.031488602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.032000 audit: BPF prog-id=10 op=LOAD Nov 1 00:23:25.032000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe156e8f8 a2=98 a3=ffffe156e8e8 items=0 ppid=4261 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.032000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:23:25.032000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit: BPF prog-id=11 op=LOAD Nov 1 00:23:25.033000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe156e7a8 a2=74 a3=95 items=0 ppid=4261 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:23:25.033000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { bpf } for pid=4327 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit: BPF prog-id=12 op=LOAD Nov 1 00:23:25.033000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe156e7d8 a2=40 a3=ffffe156e808 items=0 ppid=4261 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:23:25.033000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:23:25.033000 audit[4327]: AVC avc: denied { perfmon } for pid=4327 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.033000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffe156e8f0 
a2=50 a3=0 items=0 ppid=4261 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.033000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit: BPF prog-id=13 op=LOAD Nov 1 00:23:25.034000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0482e88 a2=98 a3=fffff0482e78 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.034000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.034000 audit: BPF prog-id=14 op=LOAD Nov 1 00:23:25.034000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff0482b18 a2=74 a3=95 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.034000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.035000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.035000 audit: BPF prog-id=15 op=LOAD Nov 1 00:23:25.035000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff0482b78 a2=94 a3=2 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.035000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.035000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { bpf } for pid=4328 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.120000 audit: BPF prog-id=16 op=LOAD Nov 1 00:23:25.120000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 
a1=fffff0482b38 a2=40 a3=fffff0482b68 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.121000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:23:25.121000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.121000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff0482c50 a2=50 a3=0 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.121000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff0482ba8 a2=28 a3=fffff0482cd8 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 
a1=fffff0482bd8 a2=28 a3=fffff0482d08 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff0482a88 a2=28 a3=fffff0482bb8 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff0482bf8 a2=28 a3=fffff0482d28 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff0482bd8 a2=28 a3=fffff0482d08 items=0 
ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff0482bc8 a2=28 a3=fffff0482cf8 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff0482bf8 a2=28 a3=fffff0482d28 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff0482bd8 a2=28 a3=fffff0482d08 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff0482bf8 a2=28 a3=fffff0482d28 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff0482bc8 a2=28 a3=fffff0482cf8 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff0482c48 a2=28 a3=fffff0482d88 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff0482980 a2=50 a3=0 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit: BPF prog-id=17 op=LOAD Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff0482988 a2=94 a3=5 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff0482a90 a2=50 a3=0 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for 
pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff0482bd8 a2=4 a3=3 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { confidentiality } for pid=4328 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff0482bb8 a2=94 a3=6 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.131000 audit[4328]: AVC avc: denied { confidentiality } for pid=4328 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:23:25.131000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff0482388 a2=94 a3=83 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { perfmon } for pid=4328 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { bpf } for pid=4328 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.132000 audit[4328]: AVC avc: denied { confidentiality } for pid=4328 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:23:25.132000 audit[4328]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff0482388 a2=94 a3=83 items=0 ppid=4261 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.132000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.149000 audit: BPF prog-id=18 op=LOAD Nov 1 00:23:25.149000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe17c3078 a2=98 a3=ffffe17c3068 items=0 ppid=4261 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.149000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:23:25.150000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { bpf } for pid=4332 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.150000 audit: BPF prog-id=19 op=LOAD Nov 1 00:23:25.150000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe17c2f28 a2=74 a3=95 items=0 ppid=4261 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.150000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:23:25.151000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { perfmon } for pid=4332 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit[4332]: AVC avc: denied { bpf } for pid=4332 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.151000 audit: BPF prog-id=20 op=LOAD Nov 1 00:23:25.151000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe17c2f58 a2=40 a3=ffffe17c2f88 items=0 ppid=4261 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.151000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:23:25.151000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:23:25.214545 systemd-networkd[1098]: vxlan.calico: Link UP Nov 1 00:23:25.214552 systemd-networkd[1098]: vxlan.calico: Gained carrier Nov 1 00:23:25.215502 env[1324]: time="2025-11-01T00:23:25.215455305Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:25.216392 env[1324]: time="2025-11-01T00:23:25.216326986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:25.216611 kubelet[2124]: E1101 00:23:25.216562 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:25.216713 kubelet[2124]: E1101 00:23:25.216613 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:25.216798 kubelet[2124]: E1101 00:23:25.216753 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gqqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77cfb4d4d6-4nr5k_calico-apiserver(e3dd8f6c-3b39-4d19-a732-fff37a40f25e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:25.217963 kubelet[2124]: E1101 00:23:25.217909 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit: BPF prog-id=21 op=LOAD Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc9ea6e8 a2=98 a3=ffffcc9ea6d8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit: BPF prog-id=22 op=LOAD Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc9ea3c8 a2=74 a3=95 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit: BPF prog-id=23 op=LOAD Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcc9ea428 a2=94 a3=2 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcc9ea458 a2=28 
a3=ffffcc9ea588 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcc9ea488 a2=28 a3=ffffcc9ea5b8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcc9ea338 a2=28 a3=ffffcc9ea468 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcc9ea4a8 a2=28 a3=ffffcc9ea5d8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcc9ea488 a2=28 a3=ffffcc9ea5b8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.239000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.239000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcc9ea478 a2=28 a3=ffffcc9ea5a8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.239000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcc9ea4a8 a2=28 a3=ffffcc9ea5d8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcc9ea488 a2=28 a3=ffffcc9ea5b8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcc9ea4a8 a2=28 a3=ffffcc9ea5d8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcc9ea478 a2=28 a3=ffffcc9ea5a8 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffcc9ea4f8 a2=28 a3=ffffcc9ea638 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit: BPF prog-id=24 op=LOAD Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc9ea318 a2=40 a3=ffffcc9ea348 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffcc9ea340 a2=50 a3=0 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffcc9ea340 a2=50 a3=0 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit: BPF prog-id=25 op=LOAD Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc9e9aa8 a2=94 a3=2 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.241000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { perfmon } for pid=4359 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit[4359]: AVC avc: denied { bpf } for pid=4359 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.241000 audit: BPF prog-id=26 op=LOAD Nov 1 00:23:25.241000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcc9e9c38 a2=94 a3=30 items=0 ppid=4261 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: 
denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit: BPF prog-id=27 op=LOAD Nov 1 00:23:25.249000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc938e598 a2=98 a3=ffffc938e588 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.249000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit: BPF prog-id=28 op=LOAD Nov 1 00:23:25.249000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc938e228 a2=74 a3=95 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.249000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.249000 audit: BPF prog-id=29 op=LOAD Nov 1 00:23:25.249000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc938e288 a2=94 a3=2 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.249000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.249000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { bpf } 
for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit: BPF prog-id=30 op=LOAD Nov 1 00:23:25.336000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc938e248 a2=40 a3=ffffc938e278 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.336000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.336000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:23:25.336000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.336000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc938e360 a2=50 a3=0 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.336000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.344000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.344000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc938e2b8 a2=28 a3=ffffc938e3e8 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.344000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.344000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc938e2e8 a2=28 a3=ffffc938e418 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.344000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.344000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc938e198 a2=28 a3=ffffc938e2c8 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.344000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.344000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.344000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc938e308 a2=28 a3=ffffc938e438 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.344000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.344000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc938e2e8 a2=28 a3=ffffc938e418 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.344000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.344000 audit[4368]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc938e2d8 a2=28 a3=ffffc938e408 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc938e308 a2=28 a3=ffffc938e438 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc938e2e8 a2=28 a3=ffffc938e418 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc938e308 a2=28 a3=ffffc938e438 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc938e2d8 a2=28 a3=ffffc938e408 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc938e358 a2=28 a3=ffffc938e498 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc938e090 a2=50 a3=0 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { 
perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit: BPF prog-id=31 op=LOAD Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc938e098 a2=94 a3=5 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc938e1a0 a2=50 a3=0 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc938e2e8 a2=4 a3=3 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { confidentiality } for pid=4368 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc938e2c8 a2=94 a3=6 items=0 ppid=4261 
pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { confidentiality } for pid=4368 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc938da98 a2=94 a3=83 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { perfmon } for pid=4368 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.345000 audit[4368]: AVC avc: denied { confidentiality } for pid=4368 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:23:25.345000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc938da98 a2=94 a3=83 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.346000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.346000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc938f4d8 a2=10 a3=ffffc938f5c8 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.346000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.346000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc938f398 a2=10 a3=ffffc938f488 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.346000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.346000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc938f308 a2=10 a3=ffffc938f488 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.346000 audit[4368]: AVC avc: denied { bpf } for pid=4368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:23:25.346000 audit[4368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc938f308 a2=10 a3=ffffc938f488 items=0 ppid=4261 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:23:25.354000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:23:25.361531 systemd-networkd[1098]: cali6b455043a5a: Gained IPv6LL Nov 1 00:23:25.416000 audit[4395]: NETFILTER_CFG table=nat:113 family=2 entries=15 op=nft_register_chain pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:23:25.416000 audit[4395]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd11ad890 a2=0 a3=ffffb6cf8fa8 items=0 ppid=4261 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.416000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:23:25.420000 audit[4396]: NETFILTER_CFG table=raw:114 family=2 entries=21 op=nft_register_chain pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:23:25.420000 audit[4396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcebff610 a2=0 a3=ffffb6166fa8 items=0 ppid=4261 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.420000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:23:25.422000 audit[4399]: NETFILTER_CFG table=mangle:115 family=2 entries=16 op=nft_register_chain pid=4399 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:23:25.422000 audit[4399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe88bc110 a2=0 a3=ffffafc65fa8 items=0 ppid=4261 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.422000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:23:25.436000 audit[4398]: NETFILTER_CFG table=filter:116 family=2 entries=269 op=nft_register_chain pid=4398 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:23:25.436000 audit[4398]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=158872 a0=3 a1=fffff00ce660 a2=0 a3=ffff82ed4fa8 items=0 ppid=4261 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:23:25.640485 env[1324]: time="2025-11-01T00:23:25.640437144Z" level=info msg="StopPodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\"" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.683 [INFO][4421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.683 [INFO][4421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" iface="eth0" netns="/var/run/netns/cni-c92a701d-f814-fc01-9a91-9e9690d72d4a" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.683 [INFO][4421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" iface="eth0" netns="/var/run/netns/cni-c92a701d-f814-fc01-9a91-9e9690d72d4a" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.684 [INFO][4421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" iface="eth0" netns="/var/run/netns/cni-c92a701d-f814-fc01-9a91-9e9690d72d4a" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.684 [INFO][4421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.684 [INFO][4421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.707 [INFO][4431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.707 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.707 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.716 [WARNING][4431] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.716 [INFO][4431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.719 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.724345 env[1324]: 2025-11-01 00:23:25.722 [INFO][4421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:25.727906 env[1324]: time="2025-11-01T00:23:25.727499473Z" level=info msg="TearDown network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" successfully" Nov 1 00:23:25.727906 env[1324]: time="2025-11-01T00:23:25.727533313Z" level=info msg="StopPodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" returns successfully" Nov 1 00:23:25.728975 systemd[1]: run-netns-cni\x2dc92a701d\x2df814\x2dfc01\x2d9a91\x2d9e9690d72d4a.mount: Deactivated successfully. 
Nov 1 00:23:25.729981 kubelet[2124]: E1101 00:23:25.729955 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.730720 env[1324]: time="2025-11-01T00:23:25.730667515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfnxh,Uid:2ceacdf1-40ab-4971-87a7-298d59c91848,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:25.809876 kubelet[2124]: E1101 00:23:25.808750 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.809607 systemd-networkd[1098]: cali66e52162bd1: Gained IPv6LL Nov 1 00:23:25.811249 kubelet[2124]: E1101 00:23:25.811021 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:25.811507 kubelet[2124]: E1101 00:23:25.811447 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:25.811862 kubelet[2124]: E1101 00:23:25.811555 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:25.853000 audit[4464]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:25.853000 audit[4464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee2da1c0 a2=0 a3=1 items=0 ppid=2274 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:25.860000 audit[4464]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:25.860000 audit[4464]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffee2da1c0 a2=0 a3=1 items=0 ppid=2274 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:25.873786 systemd-networkd[1098]: calib4e7c11f848: Link UP Nov 1 00:23:25.875263 systemd-networkd[1098]: calib4e7c11f848: Gained carrier Nov 1 00:23:25.875532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib4e7c11f848: link becomes ready Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.783 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0 coredns-668d6bf9bc- kube-system 2ceacdf1-40ab-4971-87a7-298d59c91848 1106 0 2025-11-01 00:22:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jfnxh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4e7c11f848 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.783 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.814 [INFO][4455] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" HandleID="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.814 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" HandleID="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jfnxh", "timestamp":"2025-11-01 00:23:25.814500722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.814 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.814 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.814 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.826 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.836 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.843 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.846 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.855 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.855 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.857 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874 Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.861 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.867 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" host="localhost" Nov 1 00:23:25.889075 
env[1324]: 2025-11-01 00:23:25.868 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" host="localhost" Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.868 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.889075 env[1324]: 2025-11-01 00:23:25.868 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" HandleID="k8s-pod-network.ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.889701 env[1324]: 2025-11-01 00:23:25.870 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ceacdf1-40ab-4971-87a7-298d59c91848", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-668d6bf9bc-jfnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e7c11f848", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.889701 env[1324]: 2025-11-01 00:23:25.870 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.889701 env[1324]: 2025-11-01 00:23:25.870 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4e7c11f848 ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.889701 env[1324]: 2025-11-01 00:23:25.876 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.889701 env[1324]: 2025-11-01 00:23:25.876 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ceacdf1-40ab-4971-87a7-298d59c91848", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874", Pod:"coredns-668d6bf9bc-jfnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e7c11f848", MAC:"e6:db:6a:9c:42:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.889701 env[1324]: 2025-11-01 00:23:25.886 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfnxh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:25.898609 env[1324]: time="2025-11-01T00:23:25.898308209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:25.898609 env[1324]: time="2025-11-01T00:23:25.898354729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:25.898609 env[1324]: time="2025-11-01T00:23:25.898364769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:25.898609 env[1324]: time="2025-11-01T00:23:25.898494609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874 pid=4481 runtime=io.containerd.runc.v2 Nov 1 00:23:25.902000 audit[4492]: NETFILTER_CFG table=filter:119 family=2 entries=48 op=nft_register_chain pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:23:25.902000 audit[4492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22704 a0=3 a1=ffffdcdb6000 a2=0 a3=ffff9500efa8 items=0 ppid=4261 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:25.902000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:23:25.929926 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:25.945638 env[1324]: time="2025-11-01T00:23:25.945598116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfnxh,Uid:2ceacdf1-40ab-4971-87a7-298d59c91848,Namespace:kube-system,Attempt:1,} returns sandbox id \"ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874\"" Nov 1 00:23:25.946361 kubelet[2124]: E1101 00:23:25.946332 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.949574 env[1324]: time="2025-11-01T00:23:25.949536478Z" level=info msg="CreateContainer within sandbox \"ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:25.965584 env[1324]: time="2025-11-01T00:23:25.965530767Z" level=info msg="CreateContainer within sandbox \"ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80008ab821c27c46c6a27ac79a63123dd86871f25ef9794c128ca82ec7dd8dd3\"" Nov 1 00:23:25.968392 env[1324]: time="2025-11-01T00:23:25.968336969Z" level=info msg="StartContainer for \"80008ab821c27c46c6a27ac79a63123dd86871f25ef9794c128ca82ec7dd8dd3\"" Nov 1 00:23:26.018727 env[1324]: time="2025-11-01T00:23:26.016974355Z" level=info msg="StartContainer for \"80008ab821c27c46c6a27ac79a63123dd86871f25ef9794c128ca82ec7dd8dd3\" returns successfully" Nov 1 00:23:26.577564 systemd-networkd[1098]: calidd768c08cb3: Gained IPv6LL Nov 1 00:23:26.641085 env[1324]: time="2025-11-01T00:23:26.640884964Z" level=info msg="StopPodSandbox for 
\"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\"" Nov 1 00:23:26.642515 systemd-networkd[1098]: vxlan.calico: Gained IPv6LL Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.690 [INFO][4566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.690 [INFO][4566] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" iface="eth0" netns="/var/run/netns/cni-0bea56d2-bc85-1d58-c153-9239d2cf31ee" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.691 [INFO][4566] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" iface="eth0" netns="/var/run/netns/cni-0bea56d2-bc85-1d58-c153-9239d2cf31ee" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.691 [INFO][4566] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" iface="eth0" netns="/var/run/netns/cni-0bea56d2-bc85-1d58-c153-9239d2cf31ee" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.691 [INFO][4566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.691 [INFO][4566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.710 [INFO][4576] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.711 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.711 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.727 [WARNING][4576] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.727 [INFO][4576] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.729 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:26.732670 env[1324]: 2025-11-01 00:23:26.730 [INFO][4566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:26.733124 env[1324]: time="2025-11-01T00:23:26.732798293Z" level=info msg="TearDown network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" successfully" Nov 1 00:23:26.733124 env[1324]: time="2025-11-01T00:23:26.732829533Z" level=info msg="StopPodSandbox for \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" returns successfully" Nov 1 00:23:26.735657 env[1324]: time="2025-11-01T00:23:26.735621814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866bcf4d9f-tt9sm,Uid:b83bf5c0-405f-4b6b-b82a-0980cae1df67,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:26.811831 kubelet[2124]: E1101 00:23:26.811588 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:26.812401 kubelet[2124]: E1101 00:23:26.812370 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:26.829350 kubelet[2124]: I1101 00:23:26.828199 2124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jfnxh" podStartSLOduration=38.828180623 podStartE2EDuration="38.828180623s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:26.828004983 +0000 UTC m=+44.291898161" watchObservedRunningTime="2025-11-01 00:23:26.828180623 +0000 UTC m=+44.292073841" Nov 1 00:23:26.836000 audit[4607]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:26.836000 audit[4607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc02f8630 a2=0 a3=1 items=0 ppid=2274 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:26.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:26.844000 audit[4607]: NETFILTER_CFG table=nat:121 family=2 entries=44 op=nft_register_rule pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:26.844000 audit[4607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 
a1=ffffc02f8630 a2=0 a3=1 items=0 ppid=2274 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:26.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:26.867277 systemd-networkd[1098]: calid77ac858a35: Link UP Nov 1 00:23:26.868190 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:23:26.868261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid77ac858a35: link becomes ready Nov 1 00:23:26.868311 systemd-networkd[1098]: calid77ac858a35: Gained carrier Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.782 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0 calico-kube-controllers-866bcf4d9f- calico-system b83bf5c0-405f-4b6b-b82a-0980cae1df67 1136 0 2025-11-01 00:23:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:866bcf4d9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-866bcf4d9f-tt9sm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid77ac858a35 [] [] }} ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.783 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" 
Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.807 [INFO][4598] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" HandleID="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.807 [INFO][4598] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" HandleID="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-866bcf4d9f-tt9sm", "timestamp":"2025-11-01 00:23:26.807189452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.807 [INFO][4598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.807 [INFO][4598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.807 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.818 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.825 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.836 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.838 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.843 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.843 [INFO][4598] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.845 [INFO][4598] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58 Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.848 [INFO][4598] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.861 [INFO][4598] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" host="localhost" Nov 1 00:23:26.883522 
env[1324]: 2025-11-01 00:23:26.862 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" host="localhost" Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.862 [INFO][4598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:26.883522 env[1324]: 2025-11-01 00:23:26.862 [INFO][4598] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" HandleID="k8s-pod-network.cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.884128 env[1324]: 2025-11-01 00:23:26.865 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0", GenerateName:"calico-kube-controllers-866bcf4d9f-", Namespace:"calico-system", SelfLink:"", UID:"b83bf5c0-405f-4b6b-b82a-0980cae1df67", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866bcf4d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-866bcf4d9f-tt9sm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid77ac858a35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:26.884128 env[1324]: 2025-11-01 00:23:26.865 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.884128 env[1324]: 2025-11-01 00:23:26.865 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid77ac858a35 ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.884128 env[1324]: 2025-11-01 00:23:26.868 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.884128 env[1324]: 2025-11-01 00:23:26.868 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0", GenerateName:"calico-kube-controllers-866bcf4d9f-", Namespace:"calico-system", SelfLink:"", UID:"b83bf5c0-405f-4b6b-b82a-0980cae1df67", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866bcf4d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58", Pod:"calico-kube-controllers-866bcf4d9f-tt9sm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid77ac858a35", MAC:"b2:6f:78:b2:ab:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:26.884128 env[1324]: 2025-11-01 00:23:26.880 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58" Namespace="calico-system" Pod="calico-kube-controllers-866bcf4d9f-tt9sm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:26.894374 env[1324]: time="2025-11-01T00:23:26.894306138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:26.894374 env[1324]: time="2025-11-01T00:23:26.894345898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:26.894604 env[1324]: time="2025-11-01T00:23:26.894356498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:26.894854 env[1324]: time="2025-11-01T00:23:26.894810218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58 pid=4624 runtime=io.containerd.runc.v2 Nov 1 00:23:26.899000 audit[4635]: NETFILTER_CFG table=filter:122 family=2 entries=62 op=nft_register_chain pid=4635 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:23:26.899000 audit[4635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28352 a0=3 a1=ffffea2880f0 a2=0 a3=ffff7f616fa8 items=0 ppid=4261 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:26.899000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:23:26.904111 systemd[1]: run-netns-cni\x2d0bea56d2\x2dbc85\x2d1d58\x2dc153\x2d9239d2cf31ee.mount: Deactivated successfully. 
Nov 1 00:23:26.926016 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:26.942241 env[1324]: time="2025-11-01T00:23:26.942195003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-866bcf4d9f-tt9sm,Uid:b83bf5c0-405f-4b6b-b82a-0980cae1df67,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58\"" Nov 1 00:23:26.943495 env[1324]: time="2025-11-01T00:23:26.943467204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:27.155707 env[1324]: time="2025-11-01T00:23:27.154194030Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:27.157263 env[1324]: time="2025-11-01T00:23:27.156995151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:27.157653 kubelet[2124]: E1101 00:23:27.157597 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:27.157724 kubelet[2124]: E1101 00:23:27.157678 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:27.163504 kubelet[2124]: E1101 00:23:27.158058 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74m96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866bcf4d9f-tt9sm_calico-system(b83bf5c0-405f-4b6b-b82a-0980cae1df67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:27.164812 kubelet[2124]: E1101 00:23:27.164356 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:27.537618 systemd-networkd[1098]: calib4e7c11f848: Gained IPv6LL Nov 1 00:23:27.815076 kubelet[2124]: E1101 00:23:27.814982 2124 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:27.817733 kubelet[2124]: E1101 00:23:27.817658 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:27.863000 audit[4659]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:27.863000 audit[4659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffec12dce0 a2=0 a3=1 items=0 ppid=2274 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:27.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:27.877000 audit[4659]: NETFILTER_CFG table=nat:124 family=2 entries=56 op=nft_register_chain pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:27.877000 audit[4659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffec12dce0 a2=0 a3=1 items=0 ppid=2274 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 00:23:27.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:28.448838 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:35536.service. Nov 1 00:23:28.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-10.0.0.1:35536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:28.501000 audit[4661]: USER_ACCT pid=4661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:28.503048 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 35536 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:28.502000 audit[4661]: CRED_ACQ pid=4661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:28.503000 audit[4661]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd601d940 a2=3 a3=1 items=0 ppid=1 pid=4661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:28.503000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:28.504909 sshd[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:28.509202 systemd-logind[1310]: New session 9 of user core. Nov 1 00:23:28.510024 systemd[1]: Started session-9.scope. 
Nov 1 00:23:28.513000 audit[4661]: USER_START pid=4661 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:28.514000 audit[4664]: CRED_ACQ pid=4664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:28.674353 sshd[4661]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:28.673000 audit[4661]: USER_END pid=4661 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:28.674000 audit[4661]: CRED_DISP pid=4661 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:28.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-10.0.0.1:35536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:28.676750 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:35536.service: Deactivated successfully. Nov 1 00:23:28.677924 systemd-logind[1310]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:23:28.677989 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:23:28.678828 systemd-logind[1310]: Removed session 9. 
Nov 1 00:23:28.689515 systemd-networkd[1098]: calid77ac858a35: Gained IPv6LL Nov 1 00:23:28.817761 kubelet[2124]: E1101 00:23:28.817653 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:28.818453 kubelet[2124]: E1101 00:23:28.817819 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:30.530340 kubelet[2124]: I1101 00:23:30.530294 2124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:30.530759 kubelet[2124]: E1101 00:23:30.530741 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:30.556985 systemd[1]: run-containerd-runc-k8s.io-7621a62555f564289e10dfe2974ae3aa0367415a3746d1a3dee81946f4941aae-runc.zderoa.mount: Deactivated successfully. Nov 1 00:23:30.821096 kubelet[2124]: E1101 00:23:30.820991 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:33.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:23:33.679214 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:55132.service. Nov 1 00:23:33.680123 kernel: kauditd_printk_skb: 565 callbacks suppressed Nov 1 00:23:33.680189 kernel: audit: type=1130 audit(1761956613.677:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:33.730000 audit[4728]: USER_ACCT pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.731824 sshd[4728]: Accepted publickey for core from 10.0.0.1 port 55132 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:33.733495 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:33.731000 audit[4728]: CRED_ACQ pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.737500 kernel: audit: type=1101 audit(1761956613.730:446): pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.737575 kernel: audit: type=1103 audit(1761956613.731:447): pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.737599 kernel: audit: type=1006 
audit(1761956613.731:448): pid=4728 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 00:23:33.731000 audit[4728]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd02eeb70 a2=3 a3=1 items=0 ppid=1 pid=4728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:33.738851 systemd-logind[1310]: New session 10 of user core. Nov 1 00:23:33.739206 systemd[1]: Started session-10.scope. Nov 1 00:23:33.742223 kernel: audit: type=1300 audit(1761956613.731:448): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd02eeb70 a2=3 a3=1 items=0 ppid=1 pid=4728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:33.742297 kernel: audit: type=1327 audit(1761956613.731:448): proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:33.731000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:33.748000 audit[4728]: USER_START pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.749000 audit[4731]: CRED_ACQ pid=4731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.756002 kernel: audit: type=1105 audit(1761956613.748:449): pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.756065 kernel: audit: type=1103 audit(1761956613.749:450): pid=4731 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.882636 sshd[4728]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:33.885000 audit[4728]: USER_END pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.888972 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:55146.service. Nov 1 00:23:33.889527 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:55132.service: Deactivated successfully. Nov 1 00:23:33.885000 audit[4728]: CRED_DISP pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.890656 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:23:33.890756 systemd-logind[1310]: Session 10 logged out. Waiting for processes to exit. 
Nov 1 00:23:33.893182 kernel: audit: type=1106 audit(1761956613.885:451): pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.893237 kernel: audit: type=1104 audit(1761956613.885:452): pid=4728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-10.0.0.1:55146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:33.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:33.894368 systemd-logind[1310]: Removed session 10. 
Nov 1 00:23:33.934000 audit[4742]: USER_ACCT pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.936087 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:33.935000 audit[4742]: CRED_ACQ pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.935000 audit[4742]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed46d670 a2=3 a3=1 items=0 ppid=1 pid=4742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:33.935000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:33.937203 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:33.943613 systemd-logind[1310]: New session 11 of user core. Nov 1 00:23:33.944385 systemd[1]: Started session-11.scope. 
Nov 1 00:23:33.947000 audit[4742]: USER_START pid=4742 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:33.948000 audit[4747]: CRED_ACQ pid=4747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.102373 sshd[4742]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:34.103000 audit[4742]: USER_END pid=4742 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.103000 audit[4742]: CRED_DISP pid=4742 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-10.0.0.1:55156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:34.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-10.0.0.1:55146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:34.106947 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:55156.service. Nov 1 00:23:34.109024 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:55146.service: Deactivated successfully. 
Nov 1 00:23:34.109899 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:23:34.115674 systemd-logind[1310]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:23:34.120455 systemd-logind[1310]: Removed session 11. Nov 1 00:23:34.161000 audit[4755]: USER_ACCT pid=4755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.162854 sshd[4755]: Accepted publickey for core from 10.0.0.1 port 55156 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:34.162000 audit[4755]: CRED_ACQ pid=4755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.162000 audit[4755]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee87f730 a2=3 a3=1 items=0 ppid=1 pid=4755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:34.162000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:34.164113 sshd[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:34.168097 systemd-logind[1310]: New session 12 of user core. Nov 1 00:23:34.168912 systemd[1]: Started session-12.scope. 
Nov 1 00:23:34.171000 audit[4755]: USER_START pid=4755 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.173000 audit[4760]: CRED_ACQ pid=4760 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.297196 sshd[4755]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:34.297000 audit[4755]: USER_END pid=4755 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.297000 audit[4755]: CRED_DISP pid=4755 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:34.300106 systemd-logind[1310]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:23:34.300326 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:55156.service: Deactivated successfully. Nov 1 00:23:34.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-10.0.0.1:55156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:34.301339 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:23:34.302122 systemd-logind[1310]: Removed session 12. 
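The `PROCTITLE` audit records in the session open/close sequences above carry a hex-encoded value (`proctitle=737368643A20636F7265205B707269765D`), because audit hex-encodes the field whenever it contains spaces or non-printable bytes. A minimal sketch of decoding it (the helper name `decode_proctitle` is ours, not from any tool in the log):

```python
# Decode the hex-encoded proctitle field from audit PROCTITLE records.
# The kernel hex-encodes proctitle when it contains spaces or unprintable
# bytes; NUL bytes separate argv elements.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # argv elements are NUL-separated; join with spaces for display
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# → sshd: core [priv]
```

So each of these records is the privileged sshd monitor process for the `core` sessions being opened and closed above.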
Nov 1 00:23:35.641315 env[1324]: time="2025-11-01T00:23:35.641265132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:35.851395 env[1324]: time="2025-11-01T00:23:35.851328674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:35.853243 env[1324]: time="2025-11-01T00:23:35.853174034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:35.853493 kubelet[2124]: E1101 00:23:35.853441 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:35.853791 kubelet[2124]: E1101 00:23:35.853489 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:35.853791 kubelet[2124]: E1101 00:23:35.853605 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5acdfbcf0fe34a9f88c8ad5a16543143,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtvw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69fb5f888d-pptgw_calico-system(8bcd59ea-9151-4aaf-9b6c-77893bc394d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:35.855720 env[1324]: time="2025-11-01T00:23:35.855693675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:36.071195 
env[1324]: time="2025-11-01T00:23:36.071135657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:36.072704 env[1324]: time="2025-11-01T00:23:36.072659178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:36.073111 kubelet[2124]: E1101 00:23:36.073013 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:36.073111 kubelet[2124]: E1101 00:23:36.073076 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:36.073337 kubelet[2124]: E1101 00:23:36.073202 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtvw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69fb5f888d-pptgw_calico-system(8bcd59ea-9151-4aaf-9b6c-77893bc394d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.074726 kubelet[2124]: E1101 00:23:36.074658 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:23:37.640831 env[1324]: time="2025-11-01T00:23:37.640788920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:37.845468 env[1324]: time="2025-11-01T00:23:37.845396213Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:37.846401 env[1324]: time="2025-11-01T00:23:37.846351054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:37.846589 kubelet[2124]: E1101 00:23:37.846538 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:37.846589 kubelet[2124]: E1101 00:23:37.846583 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:37.846885 kubelet[2124]: E1101 00:23:37.846703 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfv7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77cfb4d4d6-l78bq_calico-apiserver(098f9c0f-a24a-4001-88bf-ea4e44e957ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:37.847925 kubelet[2124]: E1101 00:23:37.847886 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:38.644312 env[1324]: time="2025-11-01T00:23:38.644159770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:38.839816 env[1324]: time="2025-11-01T00:23:38.839767218Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:38.840976 env[1324]: time="2025-11-01T00:23:38.840932978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:38.841285 kubelet[2124]: E1101 00:23:38.841238 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:38.841361 kubelet[2124]: E1101 00:23:38.841294 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:38.841478 kubelet[2124]: E1101 00:23:38.841437 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gqqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77cfb4d4d6-4nr5k_calico-apiserver(e3dd8f6c-3b39-4d19-a732-fff37a40f25e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:38.842927 kubelet[2124]: E1101 00:23:38.842894 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:39.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-10.0.0.1:36302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:39.301041 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:36302.service. Nov 1 00:23:39.301873 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 00:23:39.301915 kernel: audit: type=1130 audit(1761956619.299:472): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-10.0.0.1:36302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:39.342000 audit[4785]: USER_ACCT pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.343979 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 36302 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:39.345016 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:39.343000 audit[4785]: CRED_ACQ pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.350118 kernel: audit: type=1101 audit(1761956619.342:473): pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.350177 kernel: audit: type=1103 audit(1761956619.343:474): pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.350207 kernel: audit: type=1006 audit(1761956619.343:475): pid=4785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 00:23:39.349969 systemd[1]: Started session-13.scope. Nov 1 00:23:39.350315 systemd-logind[1310]: New session 13 of user core. 
Nov 1 00:23:39.343000 audit[4785]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd505abb0 a2=3 a3=1 items=0 ppid=1 pid=4785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:39.354250 kernel: audit: type=1300 audit(1761956619.343:475): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd505abb0 a2=3 a3=1 items=0 ppid=1 pid=4785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:39.354303 kernel: audit: type=1327 audit(1761956619.343:475): proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:39.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:39.355300 kernel: audit: type=1105 audit(1761956619.352:476): pid=4785 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.352000 audit[4785]: USER_START pid=4785 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.354000 audit[4788]: CRED_ACQ pid=4788 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.361023 kernel: audit: type=1103 audit(1761956619.354:477): pid=4788 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.465259 sshd[4785]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:39.464000 audit[4785]: USER_END pid=4785 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.468040 systemd-logind[1310]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:23:39.468262 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:36302.service: Deactivated successfully. Nov 1 00:23:39.465000 audit[4785]: CRED_DISP pid=4785 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.469091 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:23:39.469480 systemd-logind[1310]: Removed session 13. Nov 1 00:23:39.472284 kernel: audit: type=1106 audit(1761956619.464:478): pid=4785 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.472343 kernel: audit: type=1104 audit(1761956619.465:479): pid=4785 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:39.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-10.0.0.1:36302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
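The kernel `audit:` lines in this log print numeric record types (`type=1101`, `type=1105`, …) while the userspace lines show symbolic names (`USER_ACCT`, `USER_START`, …). The correspondence for the types appearing here, per the kernel's `include/uapi/linux/audit.h` constants:

```python
# Numeric audit record types seen in this log, mapped to the symbolic names
# used by the userspace audit lines (per include/uapi/linux/audit.h).
AUDIT_TYPES = {
    1006: "LOGIN",          # auid/ses assignment for a new login
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM setcred (credentials acquired)
    1104: "CRED_DISP",      # credentials disposed
    1105: "USER_START",     # PAM session_open
    1106: "USER_END",       # PAM session_close
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",        # syscall audit record
    1327: "PROCTITLE",      # process command line
}

print(AUDIT_TYPES[1106])  # → USER_END
```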
terminal=? res=success' Nov 1 00:23:39.643303 env[1324]: time="2025-11-01T00:23:39.642986283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:39.854516 env[1324]: time="2025-11-01T00:23:39.854466891Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:39.855410 env[1324]: time="2025-11-01T00:23:39.855363411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:39.855623 kubelet[2124]: E1101 00:23:39.855584 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:39.855856 kubelet[2124]: E1101 00:23:39.855633 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:39.856361 kubelet[2124]: E1101 00:23:39.855887 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhn7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cp7jx_calico-system(bf1893fd-31bf-427a-928e-11685512f41a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.856534 env[1324]: time="2025-11-01T00:23:39.856093132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:39.857783 kubelet[2124]: E1101 00:23:39.857752 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:40.064454 env[1324]: time="2025-11-01T00:23:40.064370218Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 00:23:40.065567 env[1324]: time="2025-11-01T00:23:40.065516578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:40.065859 kubelet[2124]: E1101 00:23:40.065821 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:40.065917 kubelet[2124]: E1101 00:23:40.065875 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:40.066026 kubelet[2124]: E1101 00:23:40.065987 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.067920 env[1324]: time="2025-11-01T00:23:40.067898339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:40.263375 env[1324]: time="2025-11-01T00:23:40.263325901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.264380 env[1324]: time="2025-11-01T00:23:40.264338981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:40.264606 kubelet[2124]: E1101 00:23:40.264555 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:40.264673 kubelet[2124]: E1101 00:23:40.264613 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:40.265090 kubelet[2124]: E1101 00:23:40.264735 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.266278 kubelet[2124]: E1101 00:23:40.266228 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:40.642568 env[1324]: time="2025-11-01T00:23:40.642489262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:40.850518 env[1324]: time="2025-11-01T00:23:40.850475466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.851559 env[1324]: time="2025-11-01T00:23:40.851499026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:40.851876 kubelet[2124]: E1101 00:23:40.851832 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:40.851985 kubelet[2124]: E1101 00:23:40.851966 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:40.852198 kubelet[2124]: E1101 00:23:40.852149 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74m96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866bcf4d9f-tt9sm_calico-system(b83bf5c0-405f-4b6b-b82a-0980cae1df67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.853496 kubelet[2124]: E1101 00:23:40.853467 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:42.596207 env[1324]: time="2025-11-01T00:23:42.596144050Z" level=info msg="StopPodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\"" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.627 [WARNING][4809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--52pqx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a584285f-c40b-477a-8ddb-bfa9e3439fe6", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7", Pod:"csi-node-driver-52pqx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66e52162bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.627 [INFO][4809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.627 [INFO][4809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" iface="eth0" netns="" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.628 [INFO][4809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.628 [INFO][4809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.655 [INFO][4818] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.656 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.656 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.666 [WARNING][4818] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.666 [INFO][4818] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.667 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:42.671184 env[1324]: 2025-11-01 00:23:42.669 [INFO][4809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.671657 env[1324]: time="2025-11-01T00:23:42.671219424Z" level=info msg="TearDown network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" successfully" Nov 1 00:23:42.671657 env[1324]: time="2025-11-01T00:23:42.671249704Z" level=info msg="StopPodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" returns successfully" Nov 1 00:23:42.672178 env[1324]: time="2025-11-01T00:23:42.672152264Z" level=info msg="RemovePodSandbox for \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\"" Nov 1 00:23:42.672232 env[1324]: time="2025-11-01T00:23:42.672189824Z" level=info msg="Forcibly stopping sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\"" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.706 [WARNING][4838] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--52pqx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a584285f-c40b-477a-8ddb-bfa9e3439fe6", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05d9aa1010caa1ce1264155c0494bcaed2f600b70ec27172e9105fcf1ba7abb7", Pod:"csi-node-driver-52pqx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66e52162bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.707 [INFO][4838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.707 [INFO][4838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" iface="eth0" netns="" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.707 [INFO][4838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.707 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.725 [INFO][4847] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.725 [INFO][4847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.725 [INFO][4847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.733 [WARNING][4847] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.733 [INFO][4847] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" HandleID="k8s-pod-network.b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Workload="localhost-k8s-csi--node--driver--52pqx-eth0" Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.735 [INFO][4847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:42.738570 env[1324]: 2025-11-01 00:23:42.737 [INFO][4838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d" Nov 1 00:23:42.739032 env[1324]: time="2025-11-01T00:23:42.738587997Z" level=info msg="TearDown network for sandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" successfully" Nov 1 00:23:42.745957 env[1324]: time="2025-11-01T00:23:42.745894598Z" level=info msg="RemovePodSandbox \"b4bdbcd9fc0808043b166066c935f2205113269a5c6f10e3fbe294d92cc3f43d\" returns successfully" Nov 1 00:23:42.746451 env[1324]: time="2025-11-01T00:23:42.746422758Z" level=info msg="StopPodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\"" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.794 [WARNING][4865] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e95bf48f-c761-42b0-aba2-5fe9b024e3f4", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f", Pod:"coredns-668d6bf9bc-8c5b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif63ed2b1edc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.795 [INFO][4865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.796 [INFO][4865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" iface="eth0" netns="" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.796 [INFO][4865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.796 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.816 [INFO][4876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.816 [INFO][4876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.816 [INFO][4876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.826 [WARNING][4876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.826 [INFO][4876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.828 [INFO][4876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:42.832013 env[1324]: 2025-11-01 00:23:42.830 [INFO][4865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.832464 env[1324]: time="2025-11-01T00:23:42.832041374Z" level=info msg="TearDown network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" successfully" Nov 1 00:23:42.832464 env[1324]: time="2025-11-01T00:23:42.832072774Z" level=info msg="StopPodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" returns successfully" Nov 1 00:23:42.832930 env[1324]: time="2025-11-01T00:23:42.832906335Z" level=info msg="RemovePodSandbox for \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\"" Nov 1 00:23:42.832976 env[1324]: time="2025-11-01T00:23:42.832938975Z" level=info msg="Forcibly stopping sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\"" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.869 [WARNING][4894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e95bf48f-c761-42b0-aba2-5fe9b024e3f4", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4afb09ea9d1b833d3c4f9ae13f764b0a3cb7c26128f3e9e10a9e17f774d3977f", Pod:"coredns-668d6bf9bc-8c5b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif63ed2b1edc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.869 [INFO][4894] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.870 [INFO][4894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" iface="eth0" netns="" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.870 [INFO][4894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.870 [INFO][4894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.887 [INFO][4903] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.888 [INFO][4903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.888 [INFO][4903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.896 [WARNING][4903] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.896 [INFO][4903] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" HandleID="k8s-pod-network.9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Workload="localhost-k8s-coredns--668d6bf9bc--8c5b8-eth0" Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.901 [INFO][4903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:42.904365 env[1324]: 2025-11-01 00:23:42.902 [INFO][4894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767" Nov 1 00:23:42.906046 env[1324]: time="2025-11-01T00:23:42.904342428Z" level=info msg="TearDown network for sandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" successfully" Nov 1 00:23:42.909167 env[1324]: time="2025-11-01T00:23:42.909108109Z" level=info msg="RemovePodSandbox \"9edb2d9a08c95b3b3d6993efca4621934b5f50fd9a0165021f3363739452b767\" returns successfully" Nov 1 00:23:42.909582 env[1324]: time="2025-11-01T00:23:42.909553869Z" level=info msg="StopPodSandbox for \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\"" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.943 [WARNING][4921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0", GenerateName:"calico-kube-controllers-866bcf4d9f-", Namespace:"calico-system", SelfLink:"", UID:"b83bf5c0-405f-4b6b-b82a-0980cae1df67", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866bcf4d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58", Pod:"calico-kube-controllers-866bcf4d9f-tt9sm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid77ac858a35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.943 [INFO][4921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.943 [INFO][4921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" iface="eth0" netns="" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.943 [INFO][4921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.943 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.960 [INFO][4931] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.961 [INFO][4931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.961 [INFO][4931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.972 [WARNING][4931] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.972 [INFO][4931] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.974 [INFO][4931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:42.977533 env[1324]: 2025-11-01 00:23:42.976 [INFO][4921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:42.977979 env[1324]: time="2025-11-01T00:23:42.977567442Z" level=info msg="TearDown network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" successfully" Nov 1 00:23:42.977979 env[1324]: time="2025-11-01T00:23:42.977599762Z" level=info msg="StopPodSandbox for \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" returns successfully" Nov 1 00:23:42.978081 env[1324]: time="2025-11-01T00:23:42.978050282Z" level=info msg="RemovePodSandbox for \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\"" Nov 1 00:23:42.978134 env[1324]: time="2025-11-01T00:23:42.978088322Z" level=info msg="Forcibly stopping sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\"" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.011 [WARNING][4950] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0", GenerateName:"calico-kube-controllers-866bcf4d9f-", Namespace:"calico-system", SelfLink:"", UID:"b83bf5c0-405f-4b6b-b82a-0980cae1df67", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"866bcf4d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf4f923649f47278276a4352c66e5395200b053c0ea97ca3f738b8bd298cbc58", Pod:"calico-kube-controllers-866bcf4d9f-tt9sm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid77ac858a35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.012 [INFO][4950] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.012 [INFO][4950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" iface="eth0" netns="" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.012 [INFO][4950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.012 [INFO][4950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.030 [INFO][4958] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.030 [INFO][4958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.030 [INFO][4958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.039 [WARNING][4958] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.039 [INFO][4958] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" HandleID="k8s-pod-network.8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Workload="localhost-k8s-calico--kube--controllers--866bcf4d9f--tt9sm-eth0" Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.040 [INFO][4958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.043880 env[1324]: 2025-11-01 00:23:43.042 [INFO][4950] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f" Nov 1 00:23:43.044360 env[1324]: time="2025-11-01T00:23:43.043910294Z" level=info msg="TearDown network for sandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" successfully" Nov 1 00:23:43.048831 env[1324]: time="2025-11-01T00:23:43.048792414Z" level=info msg="RemovePodSandbox \"8daf4c7352584ba96785ece68ac010b475d962722fdef4b04147dcba5b784e2f\" returns successfully" Nov 1 00:23:43.049336 env[1324]: time="2025-11-01T00:23:43.049307615Z" level=info msg="StopPodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\"" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.081 [WARNING][4975] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" WorkloadEndpoint="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.081 [INFO][4975] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.081 [INFO][4975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" iface="eth0" netns="" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.081 [INFO][4975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.081 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.098 [INFO][4983] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.098 [INFO][4983] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.099 [INFO][4983] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.108 [WARNING][4983] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.108 [INFO][4983] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.110 [INFO][4983] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.113728 env[1324]: 2025-11-01 00:23:43.112 [INFO][4975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.114113 env[1324]: time="2025-11-01T00:23:43.113757266Z" level=info msg="TearDown network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" successfully" Nov 1 00:23:43.114113 env[1324]: time="2025-11-01T00:23:43.113786466Z" level=info msg="StopPodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" returns successfully" Nov 1 00:23:43.114414 env[1324]: time="2025-11-01T00:23:43.114376266Z" level=info msg="RemovePodSandbox for \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\"" Nov 1 00:23:43.114535 env[1324]: time="2025-11-01T00:23:43.114498626Z" level=info msg="Forcibly stopping sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\"" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.145 [WARNING][5001] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" 
WorkloadEndpoint="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.145 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.145 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" iface="eth0" netns="" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.145 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.145 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.164 [INFO][5010] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.164 [INFO][5010] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.164 [INFO][5010] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.172 [WARNING][5010] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.172 [INFO][5010] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" HandleID="k8s-pod-network.6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Workload="localhost-k8s-whisker--595d769df8--7v79c-eth0" Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.174 [INFO][5010] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.177816 env[1324]: 2025-11-01 00:23:43.176 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496" Nov 1 00:23:43.178269 env[1324]: time="2025-11-01T00:23:43.178224277Z" level=info msg="TearDown network for sandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" successfully" Nov 1 00:23:43.183389 env[1324]: time="2025-11-01T00:23:43.183335798Z" level=info msg="RemovePodSandbox \"6f0b79189a902f2b024e5033689b722a6b6fef17a4e921a2816283b12410a496\" returns successfully" Nov 1 00:23:43.184051 env[1324]: time="2025-11-01T00:23:43.183998798Z" level=info msg="StopPodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\"" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.219 [WARNING][5027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ceacdf1-40ab-4971-87a7-298d59c91848", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874", Pod:"coredns-668d6bf9bc-jfnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e7c11f848", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.219 [INFO][5027] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.219 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" iface="eth0" netns="" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.219 [INFO][5027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.219 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.238 [INFO][5036] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.238 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.238 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.249 [WARNING][5036] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.249 [INFO][5036] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.250 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.254123 env[1324]: 2025-11-01 00:23:43.252 [INFO][5027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.254619 env[1324]: time="2025-11-01T00:23:43.254140331Z" level=info msg="TearDown network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" successfully" Nov 1 00:23:43.254619 env[1324]: time="2025-11-01T00:23:43.254171171Z" level=info msg="StopPodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" returns successfully" Nov 1 00:23:43.254773 env[1324]: time="2025-11-01T00:23:43.254750291Z" level=info msg="RemovePodSandbox for \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\"" Nov 1 00:23:43.254826 env[1324]: time="2025-11-01T00:23:43.254785891Z" level=info msg="Forcibly stopping sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\"" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.286 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ceacdf1-40ab-4971-87a7-298d59c91848", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec78ae91e8e588f233f99904687ccdfc431ec28f7de3af40081b4ddf5e3c8874", Pod:"coredns-668d6bf9bc-jfnxh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e7c11f848", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.286 [INFO][5054] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.286 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" iface="eth0" netns="" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.286 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.286 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.302 [INFO][5063] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.302 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.302 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.311 [WARNING][5063] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.311 [INFO][5063] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" HandleID="k8s-pod-network.fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Workload="localhost-k8s-coredns--668d6bf9bc--jfnxh-eth0" Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.313 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.316237 env[1324]: 2025-11-01 00:23:43.314 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633" Nov 1 00:23:43.316237 env[1324]: time="2025-11-01T00:23:43.316226062Z" level=info msg="TearDown network for sandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" successfully" Nov 1 00:23:43.321685 env[1324]: time="2025-11-01T00:23:43.321641302Z" level=info msg="RemovePodSandbox \"fa406b43a30fa638400c575a6ea7cf741c2e9a12075a2582114651f5a6fa8633\" returns successfully" Nov 1 00:23:43.322275 env[1324]: time="2025-11-01T00:23:43.322229823Z" level=info msg="StopPodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\"" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.354 [WARNING][5081] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3dd8f6c-3b39-4d19-a732-fff37a40f25e", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc", Pod:"calico-apiserver-77cfb4d4d6-4nr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd768c08cb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.355 [INFO][5081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.355 [INFO][5081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" iface="eth0" netns="" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.355 [INFO][5081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.355 [INFO][5081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.373 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.373 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.373 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.383 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.383 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.384 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.387837 env[1324]: 2025-11-01 00:23:43.386 [INFO][5081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.388366 env[1324]: time="2025-11-01T00:23:43.387859714Z" level=info msg="TearDown network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" successfully" Nov 1 00:23:43.388366 env[1324]: time="2025-11-01T00:23:43.387891394Z" level=info msg="StopPodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" returns successfully" Nov 1 00:23:43.388366 env[1324]: time="2025-11-01T00:23:43.388346674Z" level=info msg="RemovePodSandbox for \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\"" Nov 1 00:23:43.388476 env[1324]: time="2025-11-01T00:23:43.388380394Z" level=info msg="Forcibly stopping sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\"" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.426 [WARNING][5109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3dd8f6c-3b39-4d19-a732-fff37a40f25e", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4845863ae6d9300d44534c3cfbf19d75838929c284fda37f85c8807e4a9e9efc", Pod:"calico-apiserver-77cfb4d4d6-4nr5k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd768c08cb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.426 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.426 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" iface="eth0" netns="" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.426 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.427 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.446 [INFO][5119] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.446 [INFO][5119] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.446 [INFO][5119] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.455 [WARNING][5119] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.455 [INFO][5119] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" HandleID="k8s-pod-network.b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--4nr5k-eth0" Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.456 [INFO][5119] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.460026 env[1324]: 2025-11-01 00:23:43.458 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15" Nov 1 00:23:43.460532 env[1324]: time="2025-11-01T00:23:43.460065647Z" level=info msg="TearDown network for sandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" successfully" Nov 1 00:23:43.463070 env[1324]: time="2025-11-01T00:23:43.463024487Z" level=info msg="RemovePodSandbox \"b5d69b86e4f937472c3a94e15103c703bc3d1f5d028f08fa0af3e9103cf1ef15\" returns successfully" Nov 1 00:23:43.463542 env[1324]: time="2025-11-01T00:23:43.463514407Z" level=info msg="StopPodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\"" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.495 [WARNING][5136] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"098f9c0f-a24a-4001-88bf-ea4e44e957ea", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d", Pod:"calico-apiserver-77cfb4d4d6-l78bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b36af0aaf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.495 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.495 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" iface="eth0" netns="" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.495 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.495 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.514 [INFO][5145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.514 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.514 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.523 [WARNING][5145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.523 [INFO][5145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.525 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.528685 env[1324]: 2025-11-01 00:23:43.527 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.529128 env[1324]: time="2025-11-01T00:23:43.528715939Z" level=info msg="TearDown network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" successfully" Nov 1 00:23:43.529128 env[1324]: time="2025-11-01T00:23:43.528748939Z" level=info msg="StopPodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" returns successfully" Nov 1 00:23:43.529259 env[1324]: time="2025-11-01T00:23:43.529220419Z" level=info msg="RemovePodSandbox for \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\"" Nov 1 00:23:43.529296 env[1324]: time="2025-11-01T00:23:43.529268659Z" level=info msg="Forcibly stopping sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\"" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.561 [WARNING][5163] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0", GenerateName:"calico-apiserver-77cfb4d4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"098f9c0f-a24a-4001-88bf-ea4e44e957ea", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cfb4d4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad04a470ac12c4465d4d74ee8beea9bf567f3de479d41654686d5bb595dd269d", Pod:"calico-apiserver-77cfb4d4d6-l78bq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b36af0aaf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.561 [INFO][5163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.561 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" iface="eth0" netns="" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.561 [INFO][5163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.561 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.581 [INFO][5172] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.581 [INFO][5172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.581 [INFO][5172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.589 [WARNING][5172] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.589 [INFO][5172] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" HandleID="k8s-pod-network.f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Workload="localhost-k8s-calico--apiserver--77cfb4d4d6--l78bq-eth0" Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.591 [INFO][5172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:43.595036 env[1324]: 2025-11-01 00:23:43.593 [INFO][5163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578" Nov 1 00:23:43.595513 env[1324]: time="2025-11-01T00:23:43.595075951Z" level=info msg="TearDown network for sandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" successfully" Nov 1 00:23:43.601099 env[1324]: time="2025-11-01T00:23:43.601060072Z" level=info msg="RemovePodSandbox \"f0852cc89cb223aa7610806523ff706e346ec5eb1bd18c75b2310cfbea3cd578\" returns successfully" Nov 1 00:23:43.601832 env[1324]: time="2025-11-01T00:23:43.601800112Z" level=info msg="StopPodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\"" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.638 [WARNING][5190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cp7jx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bf1893fd-31bf-427a-928e-11685512f41a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113", Pod:"goldmane-666569f655-cp7jx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6b455043a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.638 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.638 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" iface="eth0" netns="" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.638 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.638 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.660 [INFO][5199] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.660 [INFO][5199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.660 [INFO][5199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.669 [WARNING][5199] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.669 [INFO][5199] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.673 [INFO][5199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:43.676356 env[1324]: 2025-11-01 00:23:43.674 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.676814 env[1324]: time="2025-11-01T00:23:43.676386845Z" level=info msg="TearDown network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" successfully" Nov 1 00:23:43.676814 env[1324]: time="2025-11-01T00:23:43.676425405Z" level=info msg="StopPodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" returns successfully" Nov 1 00:23:43.677013 env[1324]: time="2025-11-01T00:23:43.676983365Z" level=info msg="RemovePodSandbox for \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\"" Nov 1 00:23:43.677071 env[1324]: time="2025-11-01T00:23:43.677034685Z" level=info msg="Forcibly stopping sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\"" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.710 [WARNING][5216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cp7jx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bf1893fd-31bf-427a-928e-11685512f41a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a68c2dc2f36d87cfee73ffd4e0884765db1815b211809ec5c8e3a8e16fc05113", Pod:"goldmane-666569f655-cp7jx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6b455043a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.710 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.710 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" iface="eth0" netns="" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.710 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.710 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.730 [INFO][5225] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.730 [INFO][5225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.730 [INFO][5225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.740 [WARNING][5225] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.740 [INFO][5225] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" HandleID="k8s-pod-network.f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Workload="localhost-k8s-goldmane--666569f655--cp7jx-eth0" Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.741 [INFO][5225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:43.744941 env[1324]: 2025-11-01 00:23:43.743 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce" Nov 1 00:23:43.744941 env[1324]: time="2025-11-01T00:23:43.744915697Z" level=info msg="TearDown network for sandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" successfully" Nov 1 00:23:43.766676 env[1324]: time="2025-11-01T00:23:43.766629901Z" level=info msg="RemovePodSandbox \"f80c0e08d467894b039f948b8f86ba376ec5e81932e79cbeb44ff3a13e6573ce\" returns successfully" Nov 1 00:23:44.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-10.0.0.1:36308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:44.468507 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:36308.service. Nov 1 00:23:44.472765 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:23:44.472861 kernel: audit: type=1130 audit(1761956624.467:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-10.0.0.1:36308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:44.513000 audit[5233]: USER_ACCT pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.515482 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 36308 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:44.516982 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:44.514000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.521581 kernel: audit: type=1101 audit(1761956624.513:482): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.521630 kernel: audit: type=1103 audit(1761956624.514:483): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.521660 kernel: audit: type=1006 audit(1761956624.515:484): pid=5233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 00:23:44.520564 systemd-logind[1310]: New session 14 of user core. Nov 1 00:23:44.521394 systemd[1]: Started session-14.scope. 
Nov 1 00:23:44.515000 audit[5233]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc21d5b70 a2=3 a3=1 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:44.526507 kernel: audit: type=1300 audit(1761956624.515:484): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc21d5b70 a2=3 a3=1 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:44.526567 kernel: audit: type=1327 audit(1761956624.515:484): proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:44.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:44.527589 kernel: audit: type=1105 audit(1761956624.523:485): pid=5233 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.523000 audit[5233]: USER_START pid=5233 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.525000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.533052 kernel: audit: type=1103 audit(1761956624.525:486): pid=5236 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.646956 sshd[5233]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:44.646000 audit[5233]: USER_END pid=5233 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.649424 systemd-logind[1310]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:23:44.649625 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:36308.service: Deactivated successfully. Nov 1 00:23:44.650473 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:23:44.650852 systemd-logind[1310]: Removed session 14. Nov 1 00:23:44.646000 audit[5233]: CRED_DISP pid=5233 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.653921 kernel: audit: type=1106 audit(1761956624.646:487): pid=5233 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.653976 kernel: audit: type=1104 audit(1761956624.646:488): pid=5233 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:44.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-10.0.0.1:36308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:23:46.641436 kubelet[2124]: E1101 00:23:46.641384 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:23:49.650339 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:38234.service. Nov 1 00:23:49.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-10.0.0.1:38234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:49.653778 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:23:49.653863 kernel: audit: type=1130 audit(1761956629.649:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-10.0.0.1:38234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:49.699000 audit[5257]: USER_ACCT pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.701153 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 38234 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:49.702462 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:49.700000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.707062 kernel: audit: type=1101 audit(1761956629.699:491): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.707124 kernel: audit: type=1103 audit(1761956629.700:492): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.707158 kernel: audit: type=1006 audit(1761956629.700:493): pid=5257 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 00:23:49.706756 systemd-logind[1310]: New session 15 of user core. Nov 1 00:23:49.707705 systemd[1]: Started session-15.scope. 
Nov 1 00:23:49.700000 audit[5257]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4b3c4e0 a2=3 a3=1 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:49.711618 kernel: audit: type=1300 audit(1761956629.700:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4b3c4e0 a2=3 a3=1 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:49.711668 kernel: audit: type=1327 audit(1761956629.700:493): proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:49.700000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:49.714000 audit[5257]: USER_START pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.715000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.722981 kernel: audit: type=1105 audit(1761956629.714:494): pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.723026 kernel: audit: type=1103 audit(1761956629.715:495): pid=5260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.838496 sshd[5257]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:49.838000 audit[5257]: USER_END pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.840000 audit[5257]: CRED_DISP pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.843048 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:38234.service: Deactivated successfully. Nov 1 00:23:49.844320 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:23:49.844700 systemd-logind[1310]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:23:49.845546 systemd-logind[1310]: Removed session 15. Nov 1 00:23:49.846439 kernel: audit: type=1106 audit(1761956629.838:496): pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.846502 kernel: audit: type=1104 audit(1761956629.840:497): pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:49.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-10.0.0.1:38234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:23:52.641626 kubelet[2124]: E1101 00:23:52.641584 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:23:52.642021 kubelet[2124]: E1101 00:23:52.641602 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:23:52.642153 kubelet[2124]: E1101 00:23:52.641656 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:23:53.641078 kubelet[2124]: E1101 00:23:53.641029 2124 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:23:53.642012 kubelet[2124]: E1101 00:23:53.641944 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:23:54.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-10.0.0.1:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:54.842454 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:38236.service. Nov 1 00:23:54.845855 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:23:54.845907 kernel: audit: type=1130 audit(1761956634.841:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-10.0.0.1:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:54.883000 audit[5271]: USER_ACCT pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.884780 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:54.886171 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:54.884000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.890471 kernel: audit: type=1101 audit(1761956634.883:500): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.890528 kernel: audit: type=1103 audit(1761956634.884:501): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.892736 kernel: audit: type=1006 audit(1761956634.884:502): pid=5271 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 00:23:54.892813 kernel: audit: type=1300 audit(1761956634.884:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c41740 a2=3 a3=1 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:54.884000 audit[5271]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c41740 a2=3 a3=1 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:54.893859 systemd[1]: Started session-16.scope. Nov 1 00:23:54.894370 systemd-logind[1310]: New session 16 of user core. Nov 1 00:23:54.884000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:54.897035 kernel: audit: type=1327 audit(1761956634.884:502): proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:54.898000 audit[5271]: USER_START pid=5271 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.899000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.906073 kernel: audit: type=1105 audit(1761956634.898:503): pid=5271 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:54.906151 kernel: audit: type=1103 audit(1761956634.899:504): pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.037512 sshd[5271]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.038962 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:38242.service. Nov 1 00:23:55.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-10.0.0.1:38242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:55.042448 kernel: audit: type=1130 audit(1761956635.037:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-10.0.0.1:38242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:55.041000 audit[5271]: USER_END pid=5271 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.044537 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:38236.service: Deactivated successfully. Nov 1 00:23:55.045440 systemd[1]: session-16.scope: Deactivated successfully. 
Nov 1 00:23:55.041000 audit[5271]: CRED_DISP pid=5271 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-10.0.0.1:38236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:55.046938 systemd-logind[1310]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:23:55.047433 kernel: audit: type=1106 audit(1761956635.041:506): pid=5271 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.047701 systemd-logind[1310]: Removed session 16. 
Nov 1 00:23:55.083000 audit[5283]: USER_ACCT pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.084985 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 38242 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:55.087000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.087000 audit[5283]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffef464d40 a2=3 a3=1 items=0 ppid=1 pid=5283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:55.087000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:55.088887 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:55.099614 systemd[1]: Started session-17.scope. Nov 1 00:23:55.100008 systemd-logind[1310]: New session 17 of user core. 
Nov 1 00:23:55.104000 audit[5283]: USER_START pid=5283 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.105000 audit[5288]: CRED_ACQ pid=5288 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.354057 sshd[5283]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.356257 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:38254.service. Nov 1 00:23:55.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-10.0.0.1:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:55.355000 audit[5283]: USER_END pid=5283 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.356000 audit[5283]: CRED_DISP pid=5283 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.361575 systemd-logind[1310]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:23:55.361751 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:38242.service: Deactivated successfully. 
Nov 1 00:23:55.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-10.0.0.1:38242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:55.362612 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:23:55.363064 systemd-logind[1310]: Removed session 17. Nov 1 00:23:55.399000 audit[5295]: USER_ACCT pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.401107 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 38254 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:55.400000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.400000 audit[5295]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff58d4f30 a2=3 a3=1 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:55.400000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:55.402641 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:55.406142 systemd-logind[1310]: New session 18 of user core. Nov 1 00:23:55.407126 systemd[1]: Started session-18.scope. 
Nov 1 00:23:55.409000 audit[5295]: USER_START pid=5295 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.410000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:55.998000 audit[5312]: NETFILTER_CFG table=filter:125 family=2 entries=26 op=nft_register_rule pid=5312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:55.998000 audit[5312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd4ddb120 a2=0 a3=1 items=0 ppid=2274 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:55.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:56.002134 sshd[5295]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:56.004294 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:38258.service. 
Nov 1 00:23:56.002000 audit[5295]: USER_END pid=5295 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.002000 audit[5295]: CRED_DISP pid=5295 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.92:22-10.0.0.1:38258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:56.005766 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:38254.service: Deactivated successfully. Nov 1 00:23:56.006689 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:23:56.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-10.0.0.1:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:56.004000 audit[5312]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=5312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:56.004000 audit[5312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd4ddb120 a2=0 a3=1 items=0 ppid=2274 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:56.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:56.010003 systemd-logind[1310]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:23:56.011027 systemd-logind[1310]: Removed session 18. Nov 1 00:23:56.021000 audit[5318]: NETFILTER_CFG table=filter:127 family=2 entries=38 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:56.021000 audit[5318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdabcbf20 a2=0 a3=1 items=0 ppid=2274 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:56.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:56.027000 audit[5318]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:23:56.027000 audit[5318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdabcbf20 a2=0 a3=1 items=0 ppid=2274 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:56.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:23:56.048550 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 38258 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:56.047000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.050177 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:56.048000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.048000 audit[5313]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa3e0850 a2=3 a3=1 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:56.048000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:56.054213 systemd-logind[1310]: New session 19 of user core. Nov 1 00:23:56.054623 systemd[1]: Started session-19.scope. 
Nov 1 00:23:56.057000 audit[5313]: USER_START pid=5313 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.059000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-10.0.0.1:38260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:56.307230 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:38260.service. Nov 1 00:23:56.305257 sshd[5313]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:56.312000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.312000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.315475 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:38258.service: Deactivated successfully. Nov 1 00:23:56.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.92:22-10.0.0.1:38258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:23:56.317535 systemd-logind[1310]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:23:56.317558 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:23:56.318898 systemd-logind[1310]: Removed session 19. Nov 1 00:23:56.348000 audit[5328]: USER_ACCT pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.350318 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 38260 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:23:56.349000 audit[5328]: CRED_ACQ pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.349000 audit[5328]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc084d6a0 a2=3 a3=1 items=0 ppid=1 pid=5328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:23:56.349000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:23:56.351436 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:56.355036 systemd-logind[1310]: New session 20 of user core. Nov 1 00:23:56.355744 systemd[1]: Started session-20.scope. 
Nov 1 00:23:56.360000 audit[5328]: USER_START pid=5328 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.362000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.485348 sshd[5328]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:56.484000 audit[5328]: USER_END pid=5328 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.485000 audit[5328]: CRED_DISP pid=5328 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:23:56.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-10.0.0.1:38260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:23:56.488192 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:38260.service: Deactivated successfully. Nov 1 00:23:56.489113 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:23:56.490255 systemd-logind[1310]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:23:56.490977 systemd-logind[1310]: Removed session 20. 
Nov 1 00:24:00.641766 kubelet[2124]: E1101 00:24:00.641704 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:00.644307 env[1324]: time="2025-11-01T00:24:00.644252848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:00.660166 systemd[1]: run-containerd-runc-k8s.io-7621a62555f564289e10dfe2974ae3aa0367415a3746d1a3dee81946f4941aae-runc.0zZ0TE.mount: Deactivated successfully. Nov 1 00:24:00.882364 env[1324]: time="2025-11-01T00:24:00.882296502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:00.883465 env[1324]: time="2025-11-01T00:24:00.883419952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:00.883679 kubelet[2124]: E1101 00:24:00.883636 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:00.883772 kubelet[2124]: E1101 00:24:00.883755 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:00.883970 kubelet[2124]: E1101 00:24:00.883933 2124 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5acdfbcf0fe34a9f88c8ad5a16543143,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtvw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69fb5f888d-pptgw_calico-system(8bcd59ea-9151-4aaf-9b6c-77893bc394d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:00.886037 env[1324]: time="2025-11-01T00:24:00.886005814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:24:01.094486 env[1324]: time="2025-11-01T00:24:01.094437111Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:01.095721 env[1324]: time="2025-11-01T00:24:01.095667802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:01.096160 kubelet[2124]: E1101 00:24:01.096091 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:01.096222 kubelet[2124]: E1101 00:24:01.096167 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:01.096339 kubelet[2124]: E1101 00:24:01.096301 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtvw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69fb5f888d-pptgw_calico-system(8bcd59ea-9151-4aaf-9b6c-77893bc394d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:01.097544 kubelet[2124]: E1101 00:24:01.097478 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:24:01.102000 audit[5370]: NETFILTER_CFG table=filter:129 family=2 entries=26 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:24:01.105982 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 00:24:01.106043 kernel: audit: type=1325 audit(1761956641.102:548): table=filter:129 family=2 entries=26 op=nft_register_rule pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:24:01.106118 kernel: audit: type=1300 audit(1761956641.102:548): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe83c6320 a2=0 a3=1 items=0 ppid=2274 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:01.102000 audit[5370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 
a0=3 a1=ffffe83c6320 a2=0 a3=1 items=0 ppid=2274 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:01.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:24:01.111451 kernel: audit: type=1327 audit(1761956641.102:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:24:01.118000 audit[5370]: NETFILTER_CFG table=nat:130 family=2 entries=104 op=nft_register_chain pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:24:01.118000 audit[5370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe83c6320 a2=0 a3=1 items=0 ppid=2274 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:01.125503 kernel: audit: type=1325 audit(1761956641.118:549): table=nat:130 family=2 entries=104 op=nft_register_chain pid=5370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:24:01.125563 kernel: audit: type=1300 audit(1761956641.118:549): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe83c6320 a2=0 a3=1 items=0 ppid=2274 pid=5370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:01.125584 kernel: audit: type=1327 audit(1761956641.118:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:24:01.118000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:24:01.488625 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:60876.service. Nov 1 00:24:01.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-10.0.0.1:60876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:01.492424 kernel: audit: type=1130 audit(1761956641.488:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-10.0.0.1:60876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:01.529000 audit[5371]: USER_ACCT pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.530539 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 60876 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:01.531863 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:01.530000 audit[5371]: CRED_ACQ pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.536427 kernel: audit: type=1101 audit(1761956641.529:551): pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.536478 kernel: audit: type=1103 audit(1761956641.530:552): pid=5371 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.536511 kernel: audit: type=1006 audit(1761956641.531:553): pid=5371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Nov 1 00:24:01.531000 audit[5371]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe55aba20 a2=3 a3=1 items=0 ppid=1 pid=5371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:01.531000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:01.540014 systemd-logind[1310]: New session 21 of user core. Nov 1 00:24:01.540860 systemd[1]: Started session-21.scope. Nov 1 00:24:01.544000 audit[5371]: USER_START pid=5371 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.546000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.653677 sshd[5371]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:01.654000 audit[5371]: USER_END pid=5371 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.654000 audit[5371]: CRED_DISP pid=5371 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:01.656073 systemd-logind[1310]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:24:01.656281 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:60876.service: Deactivated successfully. Nov 1 00:24:01.657209 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:24:01.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-10.0.0.1:60876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:01.657682 systemd-logind[1310]: Removed session 21. Nov 1 00:24:05.641458 env[1324]: time="2025-11-01T00:24:05.641393340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:05.879547 env[1324]: time="2025-11-01T00:24:05.879499098Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:05.884421 env[1324]: time="2025-11-01T00:24:05.884331734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:05.884587 kubelet[2124]: E1101 00:24:05.884550 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:05.884943 kubelet[2124]: E1101 00:24:05.884597 2124 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:05.884943 kubelet[2124]: E1101 00:24:05.884857 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfv7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77cfb4d4d6-l78bq_calico-apiserver(098f9c0f-a24a-4001-88bf-ea4e44e957ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:05.885058 env[1324]: time="2025-11-01T00:24:05.884898259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:05.886272 kubelet[2124]: E1101 00:24:05.886225 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea" Nov 1 00:24:06.094375 env[1324]: time="2025-11-01T00:24:06.094181141Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:06.095230 env[1324]: time="2025-11-01T00:24:06.095126628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:06.095428 kubelet[2124]: E1101 00:24:06.095361 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:06.095428 kubelet[2124]: E1101 00:24:06.095423 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:06.095642 kubelet[2124]: E1101 00:24:06.095553 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gqqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77cfb4d4d6-4nr5k_calico-apiserver(e3dd8f6c-3b39-4d19-a732-fff37a40f25e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:06.097210 kubelet[2124]: E1101 00:24:06.097176 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-4nr5k" podUID="e3dd8f6c-3b39-4d19-a732-fff37a40f25e" Nov 1 00:24:06.642031 env[1324]: time="2025-11-01T00:24:06.641810730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:06.657300 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:60886.service. Nov 1 00:24:06.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-10.0.0.1:60886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:06.661042 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:24:06.661273 kernel: audit: type=1130 audit(1761956646.657:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-10.0.0.1:60886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:24:06.705000 audit[5391]: USER_ACCT pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.705619 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 60886 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:06.707634 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:06.706000 audit[5391]: CRED_ACQ pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.711363 kernel: audit: type=1101 audit(1761956646.705:560): pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.711439 kernel: audit: type=1103 audit(1761956646.706:561): pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.711458 kernel: audit: type=1006 audit(1761956646.706:562): pid=5391 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Nov 1 00:24:06.714279 kernel: audit: type=1300 audit(1761956646.706:562): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea3bc020 a2=3 a3=1 items=0 ppid=1 pid=5391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:24:06.706000 audit[5391]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea3bc020 a2=3 a3=1 items=0 ppid=1 pid=5391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:06.715454 systemd[1]: Started session-22.scope. Nov 1 00:24:06.715796 systemd-logind[1310]: New session 22 of user core. Nov 1 00:24:06.717156 kernel: audit: type=1327 audit(1761956646.706:562): proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:06.706000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:06.721000 audit[5391]: USER_START pid=5391 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.722000 audit[5394]: CRED_ACQ pid=5394 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.728566 kernel: audit: type=1105 audit(1761956646.721:563): pid=5391 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.728625 kernel: audit: type=1103 audit(1761956646.722:564): pid=5394 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.862741 env[1324]: time="2025-11-01T00:24:06.862678635Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:24:06.864794 sshd[5391]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:06.865000 audit[5391]: USER_END pid=5391 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.867712 systemd-logind[1310]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:24:06.868142 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:60886.service: Deactivated successfully. Nov 1 00:24:06.868934 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:24:06.869892 env[1324]: time="2025-11-01T00:24:06.869795568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:06.870067 kubelet[2124]: E1101 00:24:06.870020 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:06.870201 kubelet[2124]: E1101 00:24:06.870083 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:06.865000 audit[5391]: CRED_DISP pid=5391 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.870319 kubelet[2124]: E1101 00:24:06.870213 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPro
be:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:06.871804 systemd-logind[1310]: Removed session 22. Nov 1 00:24:06.873246 kernel: audit: type=1106 audit(1761956646.865:565): pid=5391 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.873321 kernel: audit: type=1104 audit(1761956646.865:566): pid=5391 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:06.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-10.0.0.1:60886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:24:06.874654 env[1324]: time="2025-11-01T00:24:06.874558003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:07.083852 env[1324]: time="2025-11-01T00:24:07.083777087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:07.084716 env[1324]: time="2025-11-01T00:24:07.084669573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:07.085361 kubelet[2124]: E1101 00:24:07.084913 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:07.085361 kubelet[2124]: E1101 00:24:07.084967 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:07.085361 kubelet[2124]: E1101 00:24:07.085081 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-52pqx_calico-system(a584285f-c40b-477a-8ddb-bfa9e3439fe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:07.086310 kubelet[2124]: E1101 00:24:07.086272 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-52pqx" podUID="a584285f-c40b-477a-8ddb-bfa9e3439fe6" Nov 1 00:24:07.640097 kubelet[2124]: E1101 00:24:07.640053 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:07.641088 env[1324]: time="2025-11-01T00:24:07.641050282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:07.909345 env[1324]: time="2025-11-01T00:24:07.909211164Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:07.910304 env[1324]: time="2025-11-01T00:24:07.910268131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:07.910499 kubelet[2124]: E1101 
00:24:07.910462 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:07.910603 kubelet[2124]: E1101 00:24:07.910584 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:07.910832 kubelet[2124]: E1101 00:24:07.910784 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,}
,VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhn7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cp7jx_calico-system(bf1893fd-31bf-427a-928e-11685512f41a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:07.912137 kubelet[2124]: E1101 00:24:07.912065 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cp7jx" podUID="bf1893fd-31bf-427a-928e-11685512f41a" Nov 1 00:24:08.641539 env[1324]: time="2025-11-01T00:24:08.641487416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:08.850371 env[1324]: time="2025-11-01T00:24:08.850297794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:08.884777 env[1324]: time="2025-11-01T00:24:08.884680514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:08.885019 kubelet[2124]: E1101 00:24:08.884982 2124 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:08.885310 kubelet[2124]: E1101 00:24:08.885030 2124 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 
00:24:08.885310 kubelet[2124]: E1101 00:24:08.885160 2124 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74m96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-866bcf4d9f-tt9sm_calico-system(b83bf5c0-405f-4b6b-b82a-0980cae1df67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:08.886549 kubelet[2124]: E1101 00:24:08.886521 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-866bcf4d9f-tt9sm" podUID="b83bf5c0-405f-4b6b-b82a-0980cae1df67" Nov 1 00:24:10.640137 kubelet[2124]: E1101 00:24:10.640104 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:11.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-10.0.0.1:60960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:11.867469 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:60960.service. Nov 1 00:24:11.871069 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:24:11.871167 kernel: audit: type=1130 audit(1761956651.867:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-10.0.0.1:60960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:11.914000 audit[5407]: USER_ACCT pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.914920 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 60960 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:11.918338 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:11.918611 kernel: audit: type=1101 audit(1761956651.914:569): pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.918649 kernel: audit: type=1103 audit(1761956651.917:570): pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.917000 audit[5407]: CRED_ACQ pid=5407 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.923168 kernel: audit: type=1006 audit(1761956651.917:571): pid=5407 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Nov 1 00:24:11.923209 kernel: audit: type=1300 audit(1761956651.917:571): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4161d20 a2=3 a3=1 items=0 ppid=1 pid=5407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:11.917000 audit[5407]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4161d20 a2=3 a3=1 items=0 ppid=1 pid=5407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:11.917000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:11.926593 systemd[1]: Started session-23.scope. Nov 1 00:24:11.926831 systemd-logind[1310]: New session 23 of user core. 
Nov 1 00:24:11.927441 kernel: audit: type=1327 audit(1761956651.917:571): proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:11.930000 audit[5407]: USER_START pid=5407 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.932000 audit[5410]: CRED_ACQ pid=5410 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.937236 kernel: audit: type=1105 audit(1761956651.930:572): pid=5407 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:11.937291 kernel: audit: type=1103 audit(1761956651.932:573): pid=5410 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:12.103815 sshd[5407]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:12.104000 audit[5407]: USER_END pid=5407 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:12.106362 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:60960.service: Deactivated successfully. Nov 1 00:24:12.107426 systemd-logind[1310]: Session 23 logged out. Waiting for processes to exit. 
Nov 1 00:24:12.107479 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:24:12.108211 systemd-logind[1310]: Removed session 23. Nov 1 00:24:12.104000 audit[5407]: CRED_DISP pid=5407 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:12.111526 kernel: audit: type=1106 audit(1761956652.104:574): pid=5407 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:12.111607 kernel: audit: type=1104 audit(1761956652.104:575): pid=5407 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:12.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-10.0.0.1:60960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:24:12.642979 kubelet[2124]: E1101 00:24:12.642931 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69fb5f888d-pptgw" podUID="8bcd59ea-9151-4aaf-9b6c-77893bc394d7" Nov 1 00:24:14.645727 kubelet[2124]: E1101 00:24:14.645692 2124 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:17.106485 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:60972.service. Nov 1 00:24:17.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-10.0.0.1:60972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:24:17.110431 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:24:17.110511 kernel: audit: type=1130 audit(1761956657.106:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-10.0.0.1:60972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:24:17.158000 audit[5422]: USER_ACCT pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.159522 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 60972 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:17.160952 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:17.160000 audit[5422]: CRED_ACQ pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.164315 kernel: audit: type=1101 audit(1761956657.158:578): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.164381 kernel: audit: type=1103 audit(1761956657.160:579): pid=5422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.164416 kernel: audit: type=1006 audit(1761956657.160:580): pid=5422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 00:24:17.164711 systemd-logind[1310]: New session 24 of user core. Nov 1 00:24:17.165493 systemd[1]: Started session-24.scope. 
Nov 1 00:24:17.160000 audit[5422]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9904c20 a2=3 a3=1 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:17.169004 kernel: audit: type=1300 audit(1761956657.160:580): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9904c20 a2=3 a3=1 items=0 ppid=1 pid=5422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:24:17.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:17.170272 kernel: audit: type=1327 audit(1761956657.160:580): proctitle=737368643A20636F7265205B707269765D Nov 1 00:24:17.170334 kernel: audit: type=1105 audit(1761956657.169:581): pid=5422 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.169000 audit[5422]: USER_START pid=5422 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.170000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.176241 kernel: audit: type=1103 audit(1761956657.170:582): pid=5425 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.351210 sshd[5422]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:17.351000 audit[5422]: USER_END pid=5422 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.353848 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:60972.service: Deactivated successfully. Nov 1 00:24:17.354678 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:24:17.352000 audit[5422]: CRED_DISP pid=5422 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.356296 systemd-logind[1310]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:24:17.359399 kernel: audit: type=1106 audit(1761956657.351:583): pid=5422 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.359469 kernel: audit: type=1104 audit(1761956657.352:584): pid=5422 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:24:17.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-10.0.0.1:60972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:24:17.359876 systemd-logind[1310]: Removed session 24. Nov 1 00:24:17.641139 kubelet[2124]: E1101 00:24:17.641031 2124 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77cfb4d4d6-l78bq" podUID="098f9c0f-a24a-4001-88bf-ea4e44e957ea"